A quick interactive visualisation


The original static visual came out of concepting work by our brand design team in the early stages of a new visual brand exploration.

Considering this was one of many directions being explored and likely to evolve dramatically, the challenge was to see if I could create an interactive sketch as fast as the team could produce it statically. I also wanted to rely entirely on the LLM to produce all the code, resisting the urge to jump in partway once I kinda got the gist. Instead, I wanted to practice describing the changes I wanted to make, which meant asking a lot more about what certain parts were doing than I had previously. I did end up tweaking some values to get animation speeds and smoothness I liked more… Honestly, it was just way easier.

I didn’t have access to Cursor at the time, unfortunately. So I was getting o3-mini to print out the full revised code each iteration, then copying and pasting it into a code editor to see how it went. Tedious, but still fast enough.

First steps - the requirements

I gave o3-mini the visual reference, then described what I was after.

<role>
Front-end engineer

<task>
I want to prototype an interactive visualisation

It consists of blocks that are arranged in straight lines and then the straight lines are repeated around a circular circumference to form a circle. 


Each of the blocks is a three-dimensional shape, and when the mouse hovers over one, they rotate 90 degrees on a single axis to reveal the hidden underside, which is colourful. The block directly under the mouse is the center point, and the nearby neighbours also rotate in relation to their distance from the center point, i.e. the center block rotates a full 90 degrees, the closest might be 70, the furthest 45.

Each time the blocks return to their default state the hidden colour changes, so when they rotate back, they reveal a different colour. 


Please choose whatever you think the best technology is to produce this. It will likely only run locally, maybe Vercel at the most, but it can also live in an AWS S3 bucket if HTML, CSS and JS are all that’s needed.


Attached are two visual references, one showing the default state and another with an example hover state.

I had a vague idea of how I thought I would achieve this, but I wanted to ask the model. And, a somewhat pleasant surprise, it chose Three.js, something I had never played around with before. So I thought, yeah sure, YOLO.

Below is an outline of a straightforward approach using **Three.js** for a true 3D experience, though you could also achieve a similar effect using just HTML/CSS 3D transforms and JavaScript for the interactivity. Three.js will give you more flexibility if you later want to add lighting, shading, or more complex animations.

**Why Three.js?**

• **Built for 3D**: Three.js takes care of the 3D camera, perspective, and rendering details so you can focus on the visual logic.

• **Hover detection**: Three.js includes raycasting, which makes it easy to figure out which object the mouse is hovering over.

• **Easy to host**: It’s just JavaScript—upload to Vercel, AWS S3, or any static hosting and it’ll work.

It gave me some example code, all in one HTML file. There was some initial back and forth about referencing the library correctly, which involved correcting the imports with an import map and setting up a local server to view it. When that was sorted, it was there… kinda.
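The fix looked something like this: a minimal import map in the HTML file so the bare "three" specifier resolves (the CDN URL and version here are illustrative, not necessarily what the model produced):

<script type="importmap">
  {
    "imports": {
      "three": "https://unpkg.com/three@0.160.0/build/three.module.js"
    }
  }
</script>
<script type="module">
  // The bare specifier now resolves via the import map above
  import * as THREE from "three";
</script>

From there, any module script can import Three.js without a bundler, which is all a local server or an S3 bucket needs.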

It was better than expected, but the camera angles were off. I could rotate it around to where I wanted it, but I couldn’t tell where it was starting from or which values I had changed to get there. So, without getting into XYZ rotation specifics, I gave it the image again and asked if it could represent the image more accurately… Surprisingly, this worked.

Can you give me some code so the default state looks like the attached reference?

Now for some interactivity

On the first attempt at the hover effect, the boxes were facing the wrong direction and spinning the wrong way on hover. Knowing that models are not so good with maths and physics, I thought this might be a battle in 3D space, but it came down to a few questions about which axis it was spinning them on and some trial and error.

What axis do they currently rotate on?

I got it about right, and alongside that got it to move neighbouring boxes by varying amounts relative to the origin hover point, so it was more fun to play with.

Can you widen the effect area so the hit area is bigger than the mouse pointer, like a radius around the mouse pointer, as well as the area that affects the surrounding blocks, so more blocks are affected
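What it converged on boils down to a distance-based falloff. A minimal sketch of the idea, with illustrative names and constants rather than the exact generated code:

// Rotation falls off with distance from the hovered point
const EFFECT_RADIUS = 4;       // world units around the pointer, illustrative
const MAX_ANGLE = Math.PI / 2; // the centre block gets the full 90 degrees

function targetRotation(block, hoverPoint) {
  const distance = block.position.distanceTo(hoverPoint);
  if (distance > EFFECT_RADIUS) return 0;
  const falloff = 1 - distance / EFFECT_RADIUS; // 1 at the centre, 0 at the edge
  return MAX_ANGLE * falloff;
}

Each frame, every block eases its current rotation towards targetRotation, which is what makes the neighbours lean over by smaller amounts the further out they are.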

The timing and colour were off, though. So I asked whether it could show one colour on top in the default state and another on the bottom. It could.

Is it possible to only have colour on the underside and keep the top side grey?

What I ended up with was a random colour assigned to the underside, so each time a block tilted over, it revealed a different colour. To add a little more, we also added a glow effect to the coloured side, just because we were curious.
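Under the hood, a Three.js box accepts an array of six materials, one per face, which is what makes a grey top with a coloured, glowing underside possible. A minimal sketch with illustrative values:

// Face order for the material array: +x, -x, +y (top), -y (bottom), +z, -z
const grey = new THREE.MeshStandardMaterial({ color: 0x888888 });
const underside = new THREE.MeshStandardMaterial({
  color: new THREE.Color().setHSL(Math.random(), 0.8, 0.5), // random reveal colour
  emissive: new THREE.Color(0x221133),                      // faint glow, illustrative
});
const block = new THREE.Mesh(
  new THREE.BoxGeometry(1, 0.2, 1),
  [grey, grey, grey, underside, grey, grey] // only the bottom face gets colour
);

Re-randomising underside.color each time a block settles back to rest is what makes the next flip reveal a different colour.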

Now for the centre gradient

Pretty happy with the main interactive effect, I turned my attention to the centrepiece. It actually took a minute, and I never really got all the way there with it either. But we did improve it. It had initially just rendered a flat circle; the original had a gradient, sun-like look. I initially thought that giving it the source image again and asking it to focus just on this would work. But no. I tried describing the effect. Also no. I ended up asking how I could tweak it myself and then went about trying that, but I still couldn’t quite nail the darker inside fading to a fainter outer edge that blended seamlessly into the background. For the sketch, though, I could get it to a place where it wasn’t a distraction, and that was good enough for now.

Then came probably my favourite instruction of the experiment: when I hit the centre, can it trigger the hover effect so it ripples towards the outside, then snaps back like it’s breathing? A little tweaking, but it pretty easily picked up on this concept.
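The breathing effect is essentially the same falloff animated outward over time. A rough sketch of the idea, where RING_RADIUS (the radius of the whole circle) and the timing are illustrative:

// Clicking the centre grows the effect radius outward, then eases it back
let rippleStart = null;

function onCentreClick() {
  rippleStart = performance.now();
}

// Called each frame; returns the radius the falloff should use right now
function currentRadius(now) {
  if (rippleStart === null) return EFFECT_RADIUS; // resting hover radius
  const t = (now - rippleStart) / 1200;           // 1.2 s cycle, illustrative
  if (t >= 1) { rippleStart = null; return EFFECT_RADIUS; }
  return EFFECT_RADIUS + Math.sin(t * Math.PI) * RING_RADIUS; // out, then back
}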

One more step, then back

Pretty happy with my interactive sketch, I asked the model what else I could add to enhance the experience. One of the suggestions was a particle/sparkle effect on hover. I tried for a minute, having fun asking for particle generators, but it ultimately wasn’t really producing the effect I had envisioned in my head when it first suggested it.

Wary that I had probably already sunk too much time into a direction that probably wasn’t going to make it past the first round, I just rolled back to where it was and called it a day.

Main Steps in the Conversation

  1. Initial Concept and Requirements:
  • The conversation begins with a front-end engineer outlining the goal of creating an interactive visualization: 3D blocks arranged in a circle that rotate to reveal colorful undersides when hovered over.

  2. Technology Selection:
  • The decision is made to use Three.js for its robust 3D rendering capabilities. The conversation highlights the benefits of Three.js, such as handling the 3D camera, perspective, and rendering details.

  3. Design and Implementation Details:
  • The chat includes detailed discussions on setting up the 3D scene, including initializing the THREE.Scene, THREE.PerspectiveCamera, and THREE.WebGLRenderer.

  • The blocks are created using THREE.BoxGeometry, with materials applied to achieve the desired visual effects.

  4. Interactive Features:
  • The conversation covers the implementation of hover detection using THREE.Raycaster to determine which block is under the mouse.

  • The logic for rotating blocks and changing colors is discussed, including how neighboring blocks rotate proportionally based on their distance from the hovered block.

  5. Customization and Enhancements:
  • The chat includes questions and answers about customizing the visualization, such as adding more color stops to gradients and using RGBA values for transparency.

  • There are discussions on changing the background color and ensuring the edge of the circle blends with the background.

  6. Deployment Considerations:
  • The conversation touches on deployment options, emphasizing the ease of hosting the visualization on platforms like Vercel or AWS S3.

  7. Code Examples and Guidance:
  • Throughout the chat, code snippets and examples are provided to guide the implementation process, including setting up the scene, handling hover interactions, and managing animations.

This outline captures the key stages of the conversation, highlighting the collaborative problem-solving and technical decisions that led to the interactive visualization.

I’ve been sitting on this thought for a minute: Can I pull live CMS content straight into Figma and drive component design with it? Turns out, yes. This proof of concept does just that. Here’s where it stands.

The idea

Instead of stuffing static content into Figma, why not let a headless CMS handle it? Content creators get a structured environment, designers get more flexibility, and everyone skips the endless copy-pasting.

In theory, we are supposed to move from low to high fidelity, locking things down in progressive stages that don’t require us to look back. In practice, content, structure, and final output are often developed in silos—leading to bottlenecks when everything finally collides. This experiment aims to bring these pieces together earlier, adding more context to decisions while still keeping workflows modular.

But let’s not get ahead of ourselves—first, the technical proof of concept.


Setting up the Figma plugin environment

First, I needed a Figma plugin. In Figma, go to “Plugins > Development > New Plugin…” and follow the prompts. I set up a “Figma Design” plugin with ‘Custom UI’ enabled, making it accessible under “Plugins > Development.”

You get three key files saved to your local drive:

  • manifest.json – Defines how Figma interacts with the plugin (sketched below)
  • code.ts – The core logic
  • ui.html – The plugin interface
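For reference, a minimal manifest for this kind of plugin looks roughly like the following. The name and id are placeholders, main points at the compiled code.js rather than code.ts, and the networkAccess entry anticipates the Vercel proxy that comes up later:

{
  "name": "cms-to-figma",
  "id": "0000000000000000000",
  "api": "1.0.0",
  "main": "code.js",
  "ui": "ui.html",
  "editorType": ["figma"],
  "networkAccess": { "allowedDomains": ["https://*.vercel.app"] }
}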

Setting up Prismic

On the Prismic side, I set up a simple structured content model for text and images to test dynamic population inside Figma.

Why Prismic?

No real reason—I hadn’t used it before, and this seemed like a good excuse. Turns out, it’s a simple but highly configurable CMS.

How Prismic?

Prismic’s approach to structured content aligns with atomic design. You build small elements, compose them into components, and stack those into larger layouts. In Prismic, these are called slices, referencing content blocks on a page.

There are two main tools:

  • Custom Types Editor – Define reusable content components (slices) with structured fields.
  • Page Builder – Drag-and-drop slices, reorder them, enter content, publish.

I skipped manual setup and used a blog post template from Prismic’s repo to predefine slices.

To access content via API, I generated an access token in “API & Security.”


The CORS issue

So, just pull the API content into Figma, right? Nope.

First attempt: ping the API from inside Figma. Immediate CORS error. Figma plugins run in a restricted browser environment, so all requests go out with origin: 'null', which most APIs block. A proxy was needed.

I started with this prompt in Cursor:

I need to set up an API proxy on Vercel. The proxy will be used by a Figma plugin running in a restricted web browser, which sends requests with origin: 'null'. The proxy must accept these requests and relay data from the Prismic API.

After some trial and error, I ended up with this:

import * as prismic from "@prismicio/client";

// Vercel serverless function that relays Prismic content to the Figma plugin
export default async function handler(req, res) {
  // Allow requests from any origin (the plugin sends origin: 'null')
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

  // Answer the CORS preflight request before doing any work
  if (req.method === "OPTIONS") {
    return res.status(200).end();
  }

  try {
    // Query the Prismic repository and pass the documents straight through
    const client = prismic.createClient("figma-page-build-test", {
      accessToken: "MC5aNXdBZkJJQU...", // token truncated here
    });

    const documents = await client.get();
    return res.status(200).json(documents);
  } catch (error) {
    return res.status(500).json({ error: error.message });
  }
}

Deployed to Vercel, this proxy finally let me pull CMS data into Figma.


Building the component library

With content flowing in, I created master Figma components that matched the Prismic slices, using consistent layer names for easier mapping.

To reference these in code, I needed their component IDs. The Node Inspector plugin helped with that.
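With an ID in hand, placing a component from plugin code is only a few lines (the ID below is a placeholder):

// Look up the master component by node ID and drop an instance on the page
const component = figma.getNodeById("123:456") as ComponentNode; // placeholder ID
const instance = component.createInstance();
figma.currentPage.appendChild(instance);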

The components

Prepare to be severely underwhelmed


Diving into the logic

Here’s the basic logic (sketched in code after the list):

  1. Fetch content from the proxy
  2. Define what to look for and how to interpret it
  3. Parse the API response
  4. Loop through content and match it to the right Figma components
  5. Populate the page in the correct order
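Stripped of error handling, the core of code.ts comes down to something like this. SLICE_TO_COMPONENT is a hypothetical mapping table, the node IDs are placeholders, and the response shape follows Prismic's results[].data.slices convention:

// Hypothetical mapping from Prismic slice types to master component node IDs
const SLICE_TO_COMPONENT: Record<string, string> = {
  text_block: "123:456",   // placeholder IDs
  image_block: "123:789",
};

async function populatePage(proxyUrl: string) {
  // 1. Fetch content from the proxy
  const response = await fetch(proxyUrl);
  const data = await response.json();

  // 2-3. Parse the response and walk the slices in document order
  const slices = data.results[0].data.slices;

  let y = 0;
  for (const slice of slices) {
    // 4. Match each slice type to the right Figma component
    const componentId = SLICE_TO_COMPONENT[slice.slice_type];
    if (!componentId) continue;

    const component = figma.getNodeById(componentId) as ComponentNode;
    const instance = component.createInstance();

    // 5. Stack instances top to bottom in content order
    instance.y = y;
    y += instance.height + 40;
    figma.currentPage.appendChild(instance);
  }
}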

The code.ts

One early hurdle: fonts and their variants have to be defined before inserting text elements. Hardcoding them worked but didn’t sit right, since it killed the flexibility needed if this ever becomes anything more than an experiment.

This raised an interesting question: where should these definitions be controlled from? In Prismic? Should Figma handle them? Should Figma variables dictate them?

The workaround was dynamically loading fonts based on the Figma text nodes:

async function loadFontsForTextNode(textNode: TextNode) {
  if (textNode.fontName === figma.mixed) {
    // Mixed styling: collect and load every distinct font used across the characters
    const uniqueFonts = new Set<string>();
    for (let i = 0; i < textNode.characters.length; i++) {
      const font = textNode.getRangeFontName(i, i + 1) as FontName;
      const fontKey = `${font.family}-${font.style}`;
      if (!uniqueFonts.has(fontKey)) {
        uniqueFonts.add(fontKey);
        await figma.loadFontAsync(font);
      }
    }
  } else {
    // Single font: load it directly
    await figma.loadFontAsync(textNode.fontName as FontName);
  }
}

This ensures the correct font is always applied—no need to define it anywhere.
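In practice the only rule is to await the font load before writing any text. Something like this, where sliceText stands in for whatever came back from the API:

const textNode = instance.findOne((n) => n.type === "TEXT") as TextNode;
await loadFontsForTextNode(textNode);
textNode.characters = sliceText; // safe to set now that every font is loaded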


Backlog: What’s next?

A list of ideas to make this more flexible and useful:

  • Support more formats and components (Figma + Prismic)
  • Multi-page support
  • Selectable API content feeds
  • Multiple component libraries
  • Content update notifications
  • Inline content updates within existing components
  • Version history (undo/redo for content changes)
  • Modularising the codebase
  • Better loading states and UI feedback
  • Automating Figma-to-Prismic slice conversion

Resources & code

  • GitHub – contains the full code for the plugin and proxy
  • Figma component library – A reference set of components structured to work seamlessly with the plugin - prepare to be severely underwhelmed

Final thoughts

This proof of concept shows that a Figma plugin can handle live CMS data efficiently. The goal? A content-first workflow that bridges content and design dynamically—without unnecessary friction.

The elephant in the room

Why not just go straight to code—or even a no-code platform? That’s probably the logical future state. But for now, the collaborative, exploratory nature of design work still matters. The ability to fluidly switch between content, code, and visual environments—while keeping everything in sync in real time—is the real goal. That’s the dream.