Anatomy of a Line Field Animation
January 15, 2026 | 10 minutes (1930 words)

What started as a fun idea and a low-stakes desire to add a subtle wind movement background animation to my homepage turned into an exploration of JavaScript canvas rendering, noise functions, planar waves, and the surprisingly rich parameter space of moving lines.
This post walks through the mental model behind the line field animation used as the background for this website, how it evolved from a static grid to an organic wind simulation, and what I learned about creative coding through conversational iteration with LLMs.
The Mental Model
At its core, the animation is simple: draw a field of parallel lines on a canvas, then displace each point along those lines based on mathematical functions that change over time to simulate wind.
The key insight is that lines are just sequences of points. If you can control where each point sits, you can make the line wave, ripple, or flow.
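To make that concrete, here is a minimal sketch of the idea in canvas JavaScript. The function name and parameters are my own illustration, not the actual sandbox code; it just treats a line as a sequence of sample points, each pushed sideways by some displacement function.

function drawDisplacedLine(ctx, startX, startY, angle, length, step, displaceFn) {
  // A line is just a sequence of sample points; displacing each point bends the line.
  const dx = Math.cos(angle), dy = Math.sin(angle); // direction along the line
  const nx = -dy, ny = dx;                          // perpendicular direction for displacement
  ctx.beginPath();
  for (let d = 0; d <= length; d += step) {
    const x = startX + dx * d, y = startY + dy * d; // undisplaced point on the line
    const offset = displaceFn(x, y);                // how far to push this point sideways
    const px = x + nx * offset, py = y + ny * offset;
    if (d === 0) ctx.moveTo(px, py); else ctx.lineTo(px, py);
  }
  ctx.stroke();
}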
How Wind “Moves”
Wind isn’t modeled as emanating from a point source. Instead, it’s a traveling planar wave. A useful reference is ocean waves approaching a beach rather than ripples from a dropped stone. Each point’s position is projected onto the wind direction axis, and a sine wave sweeps along that axis over time:
displacement = sin(position_along_wind_axis + time)
As time advances, the sine wave pattern shifts in the wind direction, creating the illusion of wind sweeping across the field. With two wind sources at different angles, the waves interfere to create complex, naturalistic patterns.
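Concretely, position_along_wind_axis is just the dot product of the point's coordinates with the wind's unit direction vector. A minimal sketch (the names are mine, not the sandbox's):

function windWave(x, y, windAngle, time) {
  // Project the point onto the wind axis (dot product with the wind's unit direction),
  // then sample a sine wave that sweeps along that axis as time advances.
  const along = x * Math.cos(windAngle) + y * Math.sin(windAngle);
  return Math.sin(along + time);
}

Calling this a second time with a different windAngle and summing the results is all the interference between two wind sources amounts to.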
Layered Displacement
The final displacement of each point combines several layers:
- Simplex noise — Organic, spatially-coherent randomness that scrolls in the wind direction
- Traveling sine waves — Parallel wavefronts for each wind source
- Per-line modifiers — Jitter (phase randomization), whisp (amplitude variation), and gust envelopes
Each layer operates at different spatial frequencies and serves a different purpose, but they all scroll or animate over time to create cohesive movement.
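A compressed sketch of how these layers might combine for a single point. The constants and names are illustrative, and noise2D below is a crude stand-in for a real simplex noise implementation:

// Crude stand-in for a proper simplex noise function (smooth values roughly in [-1, 1]).
const noise2D = (x, y) =>
  Math.sin(x * 1.3 + Math.sin(y * 2.1)) * Math.cos(y * 0.7 + Math.sin(x * 1.9));

// One point's total offset: scrolling noise plus traveling waves, shaped by per-line modifiers.
function totalDisplacement(x, y, t, line) {
  const noise = noise2D(x * 0.002 + t * 0.05, y * 0.002);          // organic base layer, scrolls over time
  const wave = Math.sin(x * 0.01 + t + line.jitter)                // two planar waves; jitter de-syncs lines
             + Math.sin((x + y) * 0.006 + t * 0.6 + line.jitter);
  const whisp = 1 + 0.5 * noise2D(line.index * 0.1, t * 0.02);     // slow amplitude variation per line cluster
  return noise * 6 + wave * 10 * whisp;
}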
The Evolution: From Static to Organic
The animation started simple: just a field of lines. Each turn in the session increased the complexity by introducing new parameters and controls. Below is a timeline of how the animation evolved, along with the prompt used for each step.
Stage 1: A Grid of Lines
The first version was trivially simple: parallel lines at a fixed angle, evenly spaced.
The LLM (Opus 4.5) provided the initial skeleton, including a canvas element, a render loop, and controls for the requested parameters.
Controls:
- Line Direction — The angle at which lines are drawn across the canvas
- Line Spacing — Distance between parallel lines (smaller = denser field)
- Line Width — Stroke thickness of each line
- Opacity — Transparency of the lines
- Line Color — The color of the lines
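A rough sketch of what that skeleton looks like. This is my own simplified version wired to the same controls, not the generated code, and it assumes a canvas element already exists on the page:

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Draw evenly spaced parallel lines at a fixed angle, long enough to cover the whole canvas.
function drawGrid({ angle = Math.PI / 3, spacing = 24, width = 1, opacity = 0.4, color = '#888' } = {}) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.lineWidth = width;
  ctx.globalAlpha = opacity;
  ctx.strokeStyle = color;
  const dx = Math.cos(angle), dy = Math.sin(angle); // direction along each line
  const nx = -dy, ny = dx;                          // direction to step between lines
  const reach = Math.hypot(canvas.width, canvas.height);
  for (let i = -reach; i <= reach; i += spacing) {
    const cx = canvas.width / 2 + nx * i, cy = canvas.height / 2 + ny * i;
    ctx.beginPath();
    ctx.moveTo(cx - dx * reach, cy - dy * reach);
    ctx.lineTo(cx + dx * reach, cy + dy * reach);
    ctx.stroke();
  }
}

drawGrid();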
Stage 2: Wind and Movement
Wind is modeled as a sine wave traveling in a fixed direction; it displaces the points that make up the field of lines, creating the illusion of movement. Points at the same position along the wind axis move together, so the wind appears to push cohesively through the field. Simplex noise adds organic variation: each point samples a 2D noise field and shifts perpendicular to the line direction.
Controls:
- Wind Speed — How fast the wave travels through the field
- Wind Direction — The direction the wave propagates
- Wind Size — Wavelength of the wind pattern (larger = broader, gentler curves)
- Jitter Amount — Randomizes each line’s wave phase (0 = lines move in sync, higher = lines move independently/chaotically)
- Jitter Diameter — Overall displacement magnitude (larger = points move further from origin)
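A sketch of how those controls might enter the per-point math. The parameter names mirror the controls above but are my own guess at the wiring, and noise2D is assumed to be a simplex-style noise function returning values in [-1, 1]:

// Per-point displacement: a traveling sine wave plus scrolling noise, de-synced per line by jitter.
// noise2D: assumed simplex-style noise helper, values in [-1, 1].
function pointOffset(x, y, t, lineJitter, p) {
  const along = x * Math.cos(p.windDirection) + y * Math.sin(p.windDirection); // position along wind axis
  const wave = Math.sin((along / p.windSize - p.windSpeed * t) * 2 * Math.PI
                        + lineJitter * p.jitterAmount);                        // phase randomized per line
  const noise = noise2D(x * 0.003 + t * 0.05 * Math.cos(p.windDirection),
                        y * 0.003 + t * 0.05 * Math.sin(p.windDirection));     // noise scrolls with the wind
  return (wave + noise) * 0.5 * p.jitterDiameter;                              // overall displacement magnitude
}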
Stage 3: Dual Wind Sources
Adding a second wind with independent direction and speed created interference patterns. The interaction between winds produces complex, naturalistic movement that neither wind creates alone.
New controls:
- Wind 2 Speed/Direction — A second independent wind source that combines with the first
- Wind Density — How many wave cycles fit in the viewport (higher = more turbulent appearance)
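In code terms, the second wind is just another planar wave summed with the first, with Wind Density scaling how many cycles fit across the view. A rough sketch with invented names:

// Two independent planar waves; summing them produces the interference patterns.
function dualWindDisplacement(x, y, t, p) {
  const wave = (angle, speed) => {
    const along = x * Math.cos(angle) + y * Math.sin(angle);          // position along this wind's axis
    return Math.sin((along * p.windDensity - speed * t) * 2 * Math.PI);
  };
  return wave(p.wind1Direction, p.wind1Speed) + wave(p.wind2Direction, p.wind2Speed);
}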
Stage 4: Depth Effect
The noise displacement value is also used to modulate opacity: points that displace more appear “closer” and are drawn more opaque. This simple trick adds surprising depth to a 2D animation.
New control:
- Depth Effect — How strongly displacement affects opacity (0 = uniform opacity, 1 = maximum depth variation)
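One way to express that trick, assuming a known maximum displacement; the mapping here is my guess at the shape of the control, not the sandbox's exact formula:

// Map displacement magnitude to opacity: points pushed further read as "closer" and draw more opaque.
function depthAlpha(offset, maxOffset, baseOpacity, depthEffect) {
  const closeness = Math.min(Math.abs(offset) / maxOffset, 1);      // 0 = at rest, 1 = fully displaced
  return baseOpacity * (1 - depthEffect + depthEffect * closeness); // depthEffect = 0 keeps opacity uniform
}

// Usage per segment: ctx.globalAlpha = depthAlpha(offset, 20, 0.5, 0.8);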
Stage 5: Whisp Effect
I left this prompt intentionally vague to see how the model would respond. It first attempted to add per-point turbulence, but that was too noisy. Eventually we settled on the concept of “whisp”, which scales the wind-wave amplitude applied to clusters of neighboring lines.
Whisp works well alongside jitter, but they are distinct controls. Jitter influences each line independently, altering that line’s offset relative to a wave. Whisp is a separate noise source that drifts slowly across the field and influences clusters of neighboring lines: as the whisp noise passes over a cluster, it multiplies the amplitude of the wind waves acting on that cluster. In other words, whisp is a third temporal noise source, moving through the field independently of the two winds, that allows temporarily stronger wind effects.
New control:
- Whisp — Per-line variation in wind response (0 = all lines move uniformly, higher = some lines and their neighbors catch more wind than others)
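A sketch of the idea, assuming the same noise2D helper as in the earlier sketches; sampling the noise per line (rather than per point) is what makes neighboring lines catch the wind together:

// Whisp: slow-moving noise, sampled per line, scales the wind-wave amplitude,
// so clusters of neighboring lines respond more or less strongly together.
// noise2D: assumed simplex-style noise helper, values in [-1, 1].
function whispMultiplier(lineIndex, t, whispAmount) {
  const n = noise2D(lineIndex * 0.08, t * 0.02); // neighbors get similar values; drifts slowly over time
  return 1 + whispAmount * n;                    // multiplies the amplitude of each wind wave for this line
}

// Usage: displacement = windWave(...) * whispMultiplier(lineIndex, t, params.whisp);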
Stage 6: Gusts
Wind speed sets the maximum intensity; gust controls how much of that maximum is active in different regions. A slow-moving spatial noise field modulates each wind’s intensity, so at low gust values you see calm areas punctuated by gusts sweeping through.
New controls:
- Wind 1 Gust — Oscillation envelope for the first wind (10 = constant, 1 = mostly calm with occasional gusts)
- Wind 2 Gust — Oscillation envelope for the second wind (independent of Wind 1)
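One plausible way to get that behavior (this mapping is my own guess, not the sandbox's actual formula) is to raise slow spatial noise to a power, so low gust values produce rare, sharp peaks:

// Gust: slow spatial noise shaped into an envelope on each wind's intensity.
// gust = 10 gives a constant envelope of 1; gust = 1 leaves the field mostly calm with occasional gusts.
// noise2D: assumed simplex-style noise helper, values in [-1, 1].
function gustEnvelope(x, y, t, gust) {
  const n = (noise2D(x * 0.001 + t * 0.01, y * 0.001) + 1) / 2; // slow noise remapped to 0..1
  return Math.pow(n, 10 - gust);                                // higher exponent = rarer, sharper gusts
}

// Usage: wind1Contribution *= gustEnvelope(x, y, t, params.wind1Gust);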
Prompting for Creative Code
What surprised me most was how well conversational iteration works for this kind of project. Each prompt built on the last, and the AI maintained context about what we’d built.
A few patterns that worked well:
Start vague, then refine: “Add a subtle animation” -> “Make it wave” -> “Add wind direction” -> “Two wind sources”
Incorporate feeling without controlling implementation: “Make it feel more whispy” would often lead to a better solution than “add turbulence to each point.”
Iterate via feedback: When the first whisp implementation felt wrong, describing why (“too choppy, no gradual movement”) helped find a better approach.
Use a sandbox: Adjusting parameters live and experimenting at the extreme ends of values is often more intuitive than reading the code alone.
The Sandbox: Playing with Parameters
I find the general recipe of building a sandbox first, then refining through iteration, works surprisingly well across a broad range of problems, not just for creative coding and play like this animation exercise. The underlying principle, which echoes themes from REPL-driven development, is that the faster your feedback loop, the more of the solution space you can explore and sample to find a good solution.
If you want to play with the sandbox I used to create this animation yourself, visit the animations sandbox and experiment. And if you build something interesting, I’d love to see it.
Future Directions
This line field animation was just a fun idea I had, but there are others I would like to explore:
Genetic Animations
What if parameters evolved over time based on fitness functions? Lines that “survive” based on aesthetic criteria, gradually evolving toward interesting configurations.
Layered Animations
Multiple animation layers with different parameters, composited with blend modes. A fast, fine-grained layer over a slow, broad layer could create rich depth.
Interactive Response
Animations that respond to mouse position or cursor movement. Wind that flows away from the cursor, or lines that orient toward it.
Graph-Based Visualizations
Instead of parallel lines, what about connected graphs? Nodes that drift with noise, edges that stretch and compress, creating organic network visualizations.
Audio-Reactive
Parameters modulated by audio input—bass driving wind speed, treble affecting jitter. The animation becomes a visualizer.
LLMs open new dimensions
What started as an amusing, simple background animation experiment quickly became an exploration of noise functions, wave interference, and the expressive power of parameterized systems.
I would never have spent the time learning about and playing with traveling planar waves, simplex noise, or linear interpolation if it weren’t for the ease with which LLMs enable this form of play. Just as each parameter in the animation opens a dimension of variation, LLMs open a dimension of infinite creativity and exploration.