Chris Padilla/Blog


My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.


    Full Page Video Across Devices with React

    Video on the web takes special consideration. For starters, it can be a heavy asset. Because of that, if the video is a stylistic element on the page rather than the main focus, you'll want a fallback available while the video loads. And on top of it all, playback behavior may differ between browsers and between mobile and desktop environments.

    When pulled off, though, video is an attention-grabbing style element. Background videos playing on hero sections of landing pages can set a strong tone right from the start of a user's visit to the site.

    Today, I'll share what I've learned while working with my own full-page video project. We'll tackle all the challenges and get a simple 8-second loopable video working across devices.

    Accounting for Devices

    Before setting up the elements, I want to do some groundwork. I'll need to account for two environments: mobile and desktop. In my case, I want a vertical video playing in a mobile setting and a horizontal video playing on desktop.
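    The width check itself boils down to a simple comparison. As a rough sketch, here's a hypothetical helper (the function name is my own, not from the original component; the 800px breakpoint matches the effect later in this post):

```javascript
// Hypothetical helper, not from the original component: pick the
// video source for the current window width. 800px is the breakpoint
// used by the width-check effect later in this post.
const BREAKPOINT = 800;

function pickVideoSrc(width, verticalSrc, horizontalSrc) {
  // Desktop-sized windows get the horizontal cut; everything else, vertical.
  return width > BREAKPOINT ? horizontalSrc : verticalSrc;
}
```

    In the component itself, the same comparison lives in a useEffect so it can run against the measured window width.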

    To detect this after the component has mounted in React, I can reach for a library to handle getting the window width. Let's go with useHooks/useWindowSize.

    
    const FullPageVideo = ({
      verticalVideoSrc,
      horizontalVideoSrc,
      verticalBgImageSrc = '',
      horizontalBgImageSrc = '',
    }) => {
      const isPlaying = useRef(false);
      const videoRef = useRef();
      const videoRefTwo = useRef();
      const pageLoaded = useRef(0);
      const [showPlayButton, setShowPlayButton] = useState(true);
      const { width } = useWindowSize();
      const [mediumSize, setMediumSize] = useState(false);
      
      // . . .
    }

    This isn't necessary, but in my case, I don't need the page to be fully dynamic. I only need the check for window width to happen on page load. So I'm also using a pageLoaded ref to keep track of rerenders.

    I'll then add a useEffect to handle updating the state of the app based on the width.

      useEffect(() => {
        if (pageLoaded.current < 2) {
          if (width > 800) {
            setMediumSize(true);
          } else {
            setMediumSize(false);
          }
          pageLoaded.current += 1;
        }
      }, [width]);

    With that in place, let's get the JSX written for the actual page elements:

     return (
        <div className="album-story">
          <div className="album-story-page">
            <div
              className="album-story-video-wrapper"
              style={{ display: mediumSize ? 'block' : 'none' }}
            >
              <div
                className="album-story-bg-image"
                style={{ backgroundImage: `url('${horizontalBgImageSrc}')` }}
              />
              <video
                preload="none"
                loop
                muted
                type="video/mp4"
                playsInline
                ref={videoRef}
                className="album-story-video"
                key={horizontalVideoSrc}
              >
                <source src={horizontalVideoSrc} type="video/mp4" />
              </video>
            </div>
    
            <div
              className="album-story-video-wrapper"
              style={{ display: mediumSize ? 'none' : 'block' }}
            >
              <div
                className="album-story-bg-image"
                style={{
                  backgroundImage: `url('${verticalBgImageSrc}')`,
                }}
              />
              <video
                preload="none"
                loop
                muted
                type="video/mp4"
                playsInline
                ref={videoRefTwo}
                className="album-story-video"
                key={verticalVideoSrc}
              >
                <source src={verticalVideoSrc} type="video/mp4" />
              </video>
            </div>
            <div className="album-story-play-button-container">
              <CSSTransition
                in={showPlayButton}
                timeout={2000}
                classNames="fade"
                unmountOnExit
              >
                <button
                  className="album-story-play"
                  onClick={onClick}
                  disabled={isPlaying.current}
                >
                  play
                </button>
              </CSSTransition>
            </div>
          </div>
        </div>
      );

    Note that I have two video elements on the page: horizontal and vertical.

    This seems like it could be a tradeoff. I'm opting to render both elements to the page and only hide one with CSS. Wouldn't this lead to poor performance on page load if the browser tries to download both videos?

    The way around this is pretty simple: adding preload="none" to the video tag will keep the video from automatically loading on the page.

    There is a tradeoff there: a delay before playback, since the video only starts loading once the play button has been pressed.

    A more sophisticated solution might be to use a service such as Cloudinary that will dynamically generate your video from the server. Generated videos can be cached and served up quickly. Not a sponsorship for their service, but just a consideration.

    In my case, I'll take the tradeoff. The video is only 8 seconds long and loops, so I'm not too concerned about load time.

    Fallback Images

    This is likely less necessary since I'm not autoloading videos. However, on iOS Safari, I found that the video's first frame would not show on load. So I needed a fallback image.

    This is accomplished with simple overlays in CSS:

    
    .album-story-bg-image {
      z-index: -1;
      background-size: cover;
      background-position: center;
    }
    
    .album-story-page {
      flex-grow: 1;
      flex-basis: 100%;
    }
    
    .album-story-video,
    .album-story-play-button-container,
    .album-story-bg-image {
      position: fixed;
      left: 50%;
      top: 50%;
      transform: translate(-50%, -50%);
    
      object-fit: cover;
      width: 100%;
      height: 100%;
    
    }
    

    Playing the Video

    There are guardrails in most browsers to prevent autoplaying media when loading a page. Any video or audio playback depends on a user interaction occurring first.

    There are ways of working around this in certain cases. For video in particular, you may be able to get a video autoplaying if the video is muted. Mozilla has a great deep dive on the subject of handling autoplay scenarios dynamically.
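    If you do want to attempt autoplay dynamically, the approach leans on the fact that video.play() returns a promise that rejects when the browser's autoplay policy blocks playback. A rough sketch (tryAutoplay and the onBlocked fallback are hypothetical names, not from my component):

```javascript
// Attempt autoplay; if the browser blocks it, fall back to a UI that
// waits for a user gesture. `video` is any object with a play() method
// returning a promise (an HTMLVideoElement in the browser).
async function tryAutoplay(video, onBlocked) {
  try {
    await video.play();
    return true; // playback started
  } catch (err) {
    // The autoplay policy blocked playback (e.g. NotAllowedError).
    onBlocked(err);
    return false;
  }
}
```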

    For my case, I'll wait to trigger the video on user click.

      const onClick = () => {
        if (!isPlaying.current) {
          isPlaying.current = true;
          song.current.play();
          if (videoRef.current && mediumSize) videoRef.current.play();
          if (videoRefTwo.current && !mediumSize) videoRefTwo.current.play();
          setShowPlayButton(false);
          setTimeout(() => setShowTapStory(true), 2000);
        } else {
      if (videoRef.current) videoRef.current.pause();
      if (videoRefTwo.current) videoRefTwo.current.pause();
          isPlaying.current = false;
        }
      };

    iOS Considerations

    Chrome is my daily driver on desktop. When I went to test this on iOS, the behavior was not what I expected.

    We already covered the fallback image above.

    Additionally, playing the video would kick it into a full-screen player instead of keeping it embedded in the web page.

    Thankfully, it's as easy as an attribute on the video tag to get this working: playsInline did the trick for me.

    <video
        preload="none"
        loop
        muted
        type="video/mp4"
        playsInline
        ref={videoRefTwo}
        className="album-story-video"
        key={verticalVideoSrc}
    >

    Voilà!

    With that, we now have a full-screen video working!

    (This is part of an upcoming project on this site. I can't show the results just yet, so I'll share them when it goes live!)


    Games as Hard-Work

    I just picked up good ol' Wave Race 64 for the first time in years and had a blast. It got me thinking about Jane McGonigal's Reality Is Broken. Rereading it, I stumbled upon this passage:

    What a boost to global net happiness it would be if we could positively activate the minds and bodies of hundreds of millions of people by offering them better hard work. We could offer them challenging, customizable missions and tasks, to do alone or with friends and family, whenever and wherever. We could provide them with vivid, real-time reports on the progress they're making and a clear view of the impact they're having on the world around them.

    That's exactly what the games industry is doing today. It's fulfilling our need for better hard work—and helping us choose for ourselves the right work at the right time. So you can forget the old aphorism "All work and no play makes Jack a dull boy." All good gameplay is hard work. It's hard work that we enjoy and choose for ourselves. And when we do hard work that we care about, we are priming our minds for happiness.

    I'll chime in and say art does a wonderful job of this as well.

    Music, for example, checks off several of the different types of "hard work" McGonigal highlights as present in games:

    • High-stakes work (giving a performance, recording)
    • Busywork (repeating a passage over and over)
    • Mental work (arranging and voice leading)
    • Physical work (eh, it depends here. I've played music that makes me sweat, but let's just say that the motor skills used are close enough)
    • Discovery work (learning new pieces or genres)
    • Teamwork (collaboration)
    • Creative work (composing)

    The timeline for achievement in art, though, is usually loads longer. Games provide an addictively quick feedback loop.

    Games also just do a dang good job of making failure fun. More on that later on in the book, but I'll leave it here for now.


    All of Me

    Listen on Youtube

    Why not take all of me?


    An Apple A Day

    🍎🐛

    This guy is already using the M4 chip.


    Myst Constraints

    Watch on Youtube

    Absolutely fascinating hearing the constraints on developing games and software in the early PC era. Linked above is Rand Miller discussing the way Robyn Miller would have to stack rendering 3D images for Myst back to back while he would go grab dinner.

    The whole interview is great for more of those nuggets. The memory constraint of CD ROM read speeds is one I hadn't expected.

    Amazing that, even with all the resources and speed available to us today, performance constraints remain a top-of-mind consideration for engineers. Albeit, now for optimization, rather than "will this even run at all?"


    LangGraph Email Assistant

    Harrison Chase with LangChain released a walkthrough of an AI app that handles email triage. For those who have already gotten their hands dirty with the Lang ecosystem, the structure of the graph is most interesting.

    On a high level, there's a succinct handling of tool calling. First, a message is drafted in response to an email. From there, a tool may be invoked (find meeting time, mark as read, etc.). Then, the graph can traverse to the appropriate tool.

    draft_message seems to be the heavy lifter. Tool calls often return to the draft_message node. Not unlike other software design, a parent component is ultimately responsible for multiple iterations and linking between child components.

    A few other observations:

    1. The graph entry point is through a triage node. Their example uses an LLM to determine the next steps based on message context. This can be error-prone, but is likely mitigated by the fact that this app uses human-in-the-loop.
    2. There’s a pattern for recovering from bad calls from the LLM. In this case, there is a bad_tool_call node that is responsible for rerouting to the draft_response node in the event that the agent hallucinates a tool. Another point of recovery!
    3. Rewrite Node: Message generation follows two different passes to an LLM. One to write an initial draft (“What do I need to respond with?"), and then a rewrite node. (“What tone do I need to respond with?“) A useful pattern for message refinement.
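    The routing described above can be sketched as a plain function (in JavaScript here for illustration; the real graph is built with LangGraph, and the state shape and exact tool names are assumptions for the sketch):

```javascript
// Sketch of the post-draft routing: finish, recover from a hallucinated
// tool, or traverse to the requested tool node.
const KNOWN_TOOLS = ['find_meeting_time', 'mark_as_read'];

function routeAfterDraft(state) {
  const call = state.toolCall;
  if (!call) return 'END';            // no tool requested: we're done
  if (!KNOWN_TOOLS.includes(call.name)) {
    return 'bad_tool_call';           // hallucinated tool: reroute to recover
  }
  return call.name;                   // go to the matching tool node
}
```

    Tool nodes would then route back to the drafting node, matching the hub-and-spoke shape described above.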

    You can find the graph code here. And here is the walkthrough, starting at the explanation of the graph structure.


    Oooo~ The Greatest Remaining Hits

    Everything about The Cotton Module's future space travel concept album "The Greatest Remaining Hits" is just... cool.

    I have to point out, in particular, the domain name: ooo.ghostbows.ooo. You're probably familiar with cheeky uses of country domains. "Chr.is", for example. It's lovely to see a general top-level domain in action (and an onomatopoeia, no less!)

    Do treat yourself to the tap essay. It's a delight.


    The Essay as Realm

    Elisa Gabbert in The Essay as Realm:

    I think of an essay as a realm for both the writer and the reader. When I’m working on an essay, I’m entering a loosely defined space. If we borrow Alexander’s terms again, the essay in progress is “the site”: “It is essential to work on the site,” he writes, in A Pattern Language: Towns, Buildings, Construction; “Work on the site, stay on the site, let the site tell you its secrets.” Just by beginning to think about an essay as such—by forming the intention to write on an idea or theme—I’m opening a portal, I’m creating a site, a realm. It’s a place where all my best thinking can go for a period of time, a place where the thoughts can be collected and arranged for more density of meaning.

    Any art is a portal. A painting, a song.

    Wonderfully, any creative space is a portal.

    The portal of all portals may just be the World Wide Web, where you can create solitary spaces as well as communal ones.


    Amiga Lagoon

    Clint on LGR gave a deep dive on Jim Sachs's work with the famous Marine Aquarium screensaver. What caught my attention is this beautiful predecessor, a cover art piece for the Amiga Brilliance paint program:

    💾🦩🌴

    More of Jim Sachs's art for the Amiga on the Amiga Graphics Archive.


    Like Someone In Love

    Listen on Youtube

    Lately, I find myself out gazing at stars,
    Hearing guitars... Like someone in love~


    Going Fly Fishing

    🐸🎣

    Struttin', croakin', moanin'!

    Donut Tubing the prequel.


    Finished Work

    Robin Sloan makes a case for finished works, even while nurturing a feed:

    Sometime I think that, even amidst all these ruptures & renovations, the biggest divide in media exists simply between those who finish things, & those who don’t. The divide exists also, therefore, between the platforms & institutions that support the finishing of things, & those that don’t...

    Finishing only means: the work is whole, comprehensible, enjoyable. Its invitation is persistent; permanent. (Again, think of the Green Knight, waiting on the shelf for four hundred years.) Posterity is not guaranteed; it’s not even likely; but with a book, an album, a video game: at least you are TRYING...

    Time has the last laugh, as your network performance is washed away by the same flood that produced it.

    Finished work remains, stubbornly, because it has edges to defend itself, & a solid, graspable premise with which to recruit its cult.

    The ol' Stock & Flow. Can't have one without the other.

    In music, it's etudes and jams vs recitals and recordings.

    Or perhaps you prefer keeping a sketchbook while working on paintings.

    The secret I see working the best for folks is when they can gather up their flow and make stock out of it.


    Audible on Computer Chronicles

    I can't keep myself from watching episodes of The Computer Chronicles. These are just tremendous time capsules.

    Here's a particularly fun one: Audible has been around longer than you might have thought. Below is Stewart Cheifet introducing the service in 1999 (from the Y2K special!):

    Watch on The Internet Archive

    Amazing to see the sophistication of the player. Not too different from how the iPod handled automatically downloading and clearing your subscribed podcasts.


    First Impressions from Native iOS Development

    I was in need of a detour. I've been pretty focused on large projects in the web dev category. So, to scratch the curiosity itch, I hacked away at a very simple iOS app.

    I had a few questions I was looking to answer going in. I was largely curious about how big the gulf really was between web development and native.

    Additionally, what are the benefits of fully committing to an ecosystem? VS Code is fine-tuned for JavaScript and web development, while Visual Studio is a full IDE for C# and .NET. In that sense, I had some experience, but I was still curious about how this plays out on the Apple side.

    Note: I've only given this a couple of weeks of playing around in my off time. Thoughts here are first impressions, not expert opinions from a seasoned iOS dev. Take all of the following with a grain of salt.

    Swift

    Apple's programming language for their products is fairly quick to pick up. The tour of the language on the Swift docs site gets you most of the way there, and most of it is analogous to any other language that supports classes.

    Unwrapping

    The one concept that takes getting used to is unwrapping. I'll do my best to explain concisely, since I don't want to distract too much from the rest of the post.

    Swift has an Optional type that you can denote with a question mark. If I were to try to use this value later, I would get a compile-time warning:

    let color: String? = nil
    
    print(color) // Warning: Expression implicitly coerced from 'String?' to 'Any'

    color as a variable has reserved space in memory for a value, but it's not guaranteed that the value is there. That is what the Optional type is communicating. Hence the warning when print receives an optional type.

    The Swift way of handling this is to unwrap the value.

    if let color = color {
        print(color)
    }

    On the left side of the operator, I'm declaring a new color variable for the block scope. On the right, I'm assigning it the value of my original color variable. And with the if statement at the start of the line, I'm checking that a value exists.

    By this point, if a value exists, Swift knows this is no longer an Optional value and we are safe to call print with color.

    There's even a shorthand since this is so commonly done:

    if let color {
        print(color)
    }

    Essentially, unwrapping is a fancy term for asserting that a value exists on the variable. The nice thing about Swift as a statically typed language is that it will encourage you to check your values in ambiguous scenarios.

    Xcode

    Xcode has been interesting to use. Unsurprisingly, the typography and design are very Apple. In that sense, it's delightful.

    Like any new program, it's easy to be overwhelmed by the sheer number of menus and options to toggle. However, it doesn't take long to find the few options you'll use the most often.

    It's a seamless dev experience working with iOS apps here. Unsurprisingly, when software is designed for a specific platform, it works really well with said platform. Spinning up a simulated app environment is quick and easy. UI-wise, you have a couple of options between SwiftUI and UIKit when sticking to Apple support. And adding components from those libraries is as easy as drag-and-drop.

    One thing that takes getting used to is adjusting elements through the UI of the Storyboard visual editor. When you're used to scanning documentation or skimming through options in IntelliSense, it feels slow and laborious to paw through menus to find where to resize a button element. Perhaps there are more text-driven ways of working with elements. Either way, I'm sure it would just take some time to familiarize oneself with where all the menus are.

    One of the most interesting features lies in the intersection between Interface Builder elements and handwritten code. Interface Builder is the WYSIWYG-style visual pane for adding elements to a view. However, you can connect them to your own custom-written Swift classes. Doing so even has the nice UI flair of simply dragging and dropping the element into a line of Swift code.

    import UIKit
    
    class DetailViewController: UIViewController {
        // This line was added by dragging and dropping (while holding ctrl)
        // the imageView from the Interface Builder
        @IBOutlet var imageView: UIImageView!
        var selectedPhoto: String?
        var photoTitle: String?
    
         override func viewDidLoad() {
            super.viewDidLoad()
            title = photoTitle
    
            if let imageToLoad = selectedPhoto {
                imageView.image = UIImage(named: imageToLoad)
            }
        }
    }

    There is a certain magic there. Seeing a visual element and being able to hook into it this way.

    Impressions

    My assumption was that learning Swift would take the bulk of the on-ramp time, and then developing on Apple's platform would be smooth sailing. I found the opposite to be true. Swift, at least at this stage in my tinkering, wasn't even the meat and potatoes of my development. It was largely working with Interface Builder, laying out elements there, and then writing a few lines of Swift code to get specific functionality.

    Perhaps that's unsurprising at the entry level. But I'll say, since the bulk of the benefit of native development is using native interfaces and components, it makes sense that I don't have to type nearly as much boilerplate or initialization code to get started. So in that sense, the language is not the barrier to entry.

    However, getting up to speed on platform development does take time here. I've found this to be more true the more tooling I use. Languages are largely similar in their feature sets. The frameworks and platforms, however, all require you to learn their way of developing. It's as true for iOS dev as it is for React. So, when learning this sort of thing, much of the time will be spent here.

    Largely, though, once that hurdle is covered, the all-encompassing IDE is a smooth experience. Apple is known for excellent design. So it's nice to know that, out of the box, I'm working with components that are elegant to use.

    All in all, a great first impression.


    Value Engineers

    A great write-up by Dave Thomas on how the title "Software Engineer" simply doesn't cut it when it comes to succinctly describing what the people in these roles provide:

    [A] good developer can sometimes manage to deliver that value without actually writing a line of code. Developers occupy a unique position in most companies, sitting at the confluence of many business units and their customers. Developers often have a broader picture of how the company works and how things interact that many of the business’ managers. Many times I’ve seen a manager deliver a requirement to a team, only to have the team respond, “we can do that, but why not just…?”

    So, if we’re engineering anything, it’s value, not software...

    The people who deliver value by iteratively refining software deserve to have a name for what they do. It isn’t programmer, designer, analyst, front-end developer, or software engineer. It’s bigger than that, and it’s more subtle.

    A great reminder at a time when the "programming" part is becoming more and more automated in our work. The contribution expands beyond the specific tools used to provide solutions.