Product Design

PAiMo

Timeline: 3 weeks (2025)
Status: In use
My Role: Solo Product Designer
Team: VisualForms Studio

Background

3-week Microsoft AI Agents Hackathon

Our team joined the AI Agents Hackathon held by Microsoft this spring and faced the challenge of creatively building an AI agent in 3 weeks.

Defined Challenge

An Unsatisfying Brainstorming Experience

In early user interviews, we identified the core challenges in current brainstorming workflows.

Solution Vision

All-in-one brainstorming in FigJam

PAiMo empowers teams to brainstorm and structure ideas faster inside FigJam through an AI-powered, board-first agent.

My Role

Solo Product Designer

As the only designer on the team, I led the product direction, working closely with two developers.

Final Outcome

An AI agent for brainstorming & validating ideas

View Working Product ->

Following my design, the final product is officially in use, boosting brainstorming efficiency and streamlining how users generate and organize ideas.

How did we achieve this in 3 weeks?

Stage 1 / Define

Initial Idea

What if an AI agent could help users lay out and structure ideas directly in FigJam?

When deciding what to build for the hackathon, we were drawn to our own brainstorming pain points. With the rise of AI tools, people increasingly rely on them to gather and validate ideas, often jumping between ChatGPT, Google, and FigJam before manually organizing everything into a structured canvas.

Existing product research

A comparison chart of the pros and cons of 5 key products

Define opportunity

Tools built for collaboration lag behind in AI depth.

Building on that, we mapped these key products onto a radar chart using five recurring features from the comparison analysis.

We then set our vision:

An agent that combines structured thinking, visual output, and intelligent guidance, all in one place.

Stage 2 / User research

User interviews

4 one-on-one interviews; 3 of 4 think the process could be better

I conducted interviews with people across industries who regularly use tools like FigJam, Notion, and ChatGPT in their ideation process, to pinpoint the pain points more precisely.

User journey

Cynthia's story as a starting point

Among the four interviewees, Cynthia had the most hands-on experience with brainstorming tools and frequently used a range of existing products. I chose her user story as the starting point to frame and define our design solutions.

Pain points (root causes)

🔧 LLM output lacks structure and usability

🧩 Fragmented tools break the workflow

👷 Manual rework slows things down

Stage 3 / Design decisions (Lo-fi)

Position matrix

An early guide for design decisions

While the AI agent enables a non-linear and powerful workflow, its complexity requires us to simplify the user flow to keep it intuitive.

Design Challenge 1 / How to distinguish Ask & Act modes

Define pain points

The first key design decision emerged when I realized there are moments when:

  1. The agent can't detect the user's intent.

  2. Users simply want to ask rather than take action.

So how can the product distinguish these moments and keep the key interaction flow smooth?

Iteration 1

A manual toggle between Ask and Act modes

✅ Users had full control over which mode to enter, making the system logic explicit and predictable.

⚠️ Increased cognitive load and disrupted the flow, risking a fallback to a typical LLM experience.

➡️ Switched focus back to Act and simplified the interaction.

Final decision

Auto-detection that prioritizes Act Mode

✍️ Shifted to automatic mode detection, where the agent acts when possible and defaults to Ask Mode with follow-up prompts when it can't.

💻 Built intent detection to clearly distinguish when an Act should be triggered.
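To make the decision concrete, here is a minimal sketch of what this kind of Ask/Act intent routing could look like in a FigJam plugin; the `classifyIntent` helper, the prompt wording, and the injected `callLLM` client are illustrative assumptions, not the shipped implementation.

```typescript
// Hypothetical sketch of Ask/Act intent routing for a board-first agent.
// `callLLM` stands in for whatever chat-completion client the plugin uses.

type Mode = "act" | "ask";

interface IntentResult {
  mode: Mode;
  action?: string; // e.g. "generate", "clarify", "refine", "categorize", "fill"
}

async function classifyIntent(
  message: string,
  callLLM: (prompt: string) => Promise<string>
): Promise<IntentResult> {
  // Classify the message before doing anything else, preferring Act Mode.
  const prompt = [
    "Classify the user's message for a FigJam brainstorming agent.",
    'Reply with JSON: {"mode": "act" | "ask", "action": string | null}.',
    'Prefer "act" whenever a concrete board action is possible.',
    `Message: ${JSON.stringify(message)}`,
  ].join("\n");

  try {
    const parsed = JSON.parse(await callLLM(prompt));
    if (parsed.mode === "act" && typeof parsed.action === "string") {
      return { mode: "act", action: parsed.action };
    }
  } catch {
    // Unparseable output means intent could not be detected.
  }
  // Default to Ask Mode, where the UI surfaces follow-up prompts instead.
  return { mode: "ask" };
}
```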

Design Challenge 2 / How to provide action selection?

Define problems

Although users can interact freely with our agent, an action selection bar can help guide the brainstorming process and lets us tailor prompt engineering for better outcomes.

Still, we had to consider:

  1. Will offering predefined actions unconsciously restrict user ideation?

  2. If we move forward, which actions are most useful to include?

  3. And how should we design them?

Secondary research

Synthesized findings from a large body of research papers with DeepSearch

Q1 / Will offering predefined actions unconsciously restrict user ideation?

Q2 / If we move forward, which actions are most useful to include?

Iteration 1

7 functions users can trigger with a single click

✅ Quick, intuitive next-step selection.

⚠️ More of a user-driven flow than an agent, risking shifting effort back to users and narrowing outputs.

➡️ Recentered the process around user prompts, with the AI agent taking the lead as intended.

Final decision

5 functions, strictly following academic research

✍️ Simplified the 7 functions down to 5, closely aligned with the AI agent's brainstorming flow to guide users through the process:

[Generate] [Clarify] [Refine] [Categorize] [Fill]

💻 Other frequently used functions (Summarize, Extend, etc.) get special attention on the back end through dedicated prompt engineering.
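As a rough illustration of what that per-action prompt tailoring might look like, here is a sketch with one template per action; every template string below is an assumption for illustration, not our production prompt.

```typescript
// Hypothetical sketch: per-action prompt templates, including back-end-only
// intents (Summarize, Extend) that have no button in the action bar.

const ACTION_PROMPTS: Record<string, string> = {
  generate: "Generate diverse, concise ideas as individual sticky notes.",
  clarify: "Ask targeted questions that sharpen the selected ideas.",
  refine: "Rewrite the selected notes to be clearer and more specific.",
  categorize: "Group the selected notes into labeled clusters.",
  fill: "Fill gaps in the current structure with missing ideas.",
  summarize: "Summarize the selected notes into a few key takeaways.",
  extend: "Extend the selected ideas with logical next steps.",
};

function buildPrompt(action: string, boardContext: string): string {
  // Unknown intents fall back to the default brainstorming behavior.
  const instruction = ACTION_PROMPTS[action] ?? ACTION_PROMPTS["generate"];
  return `${instruction}\n\nBoard context:\n${boardContext}`;
}
```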

Design Challenge 3 / How should the conversation scroll within the frame?

Iteration 1

GPT-style scroll: user prompt pinned to the top, generated response below

✅ User context stays visible; reading happens in sequence.

⚠️ Long answers push the follow-up section out of view.

➡️ Ensure follow-ups are visible on screen once the answer is generated.

Iteration 2

Chat-bubble segmented scroll

✅ Follow-up section visible and easy to follow.

⚠️ Two scrollable areas in a small plugin interface make precise interactions harder.

➡️ Simplify the interaction.

Final decision

A push-up scroll design

✍️ A push-up scroll design keeps follow-ups visible and simplifies interaction within the limited plugin space.

Need to scroll back to read answers?

💻 We prompt-engineered Ask Mode to keep answers short and focused, encouraging users to use follow-ups for more detailed, structured, on-board actions.

We are not reverting to a typical LLM chat!
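For illustration, here is a minimal sketch of the push-up behavior inside the plugin's UI frame, assuming a single scrollable conversation area with the follow-up section as its last child; the element structure and function name are mine, not the actual code.

```typescript
// Hypothetical sketch: one scrollable conversation area; when an answer
// finishes rendering, push older content up so follow-ups stay in view.

function pushUpToFollowUps(
  container: HTMLElement, // the single scrollable conversation area
  followUps: HTMLElement  // follow-up section, a child of `container`
): void {
  // Assumes `container` is the offsetParent of `followUps`. Scrolling to
  // this offset pins the follow-ups at the bottom edge of the visible
  // frame, avoiding the second scrollbar from Iteration 2.
  const target =
    followUps.offsetTop + followUps.offsetHeight - container.clientHeight;
  container.scrollTo({ top: Math.max(0, target), behavior: "smooth" });
}
```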

Design Challenge 4 / How to integrate tutorials to improve ease of use?

Iteration 1

Preview each function by hovering over it

✅ Gives users contextual help directly.

⚠️ The plugin surface is too limited, making hover animations unclear and hard to follow.

➡️ Simplify and separate the tutorial to make it clearer.

Final decision

A 3-step micro tutorial

✍️ Introduction / Emphasizes that Act Mode is a core part of the system and is prioritized when applicable.

✍️ Overlay Tips / Input hints in a “You do this — Agent does that” format guide users and reduce uncertainty.

✍️ Action Feedback / Tailored progress feedback reflects the agent’s specific actions.

Stage 4 / Design systems

Colors & typography

A system that passes all WCAG contrast-ratio tests, ensuring readability & accessibility
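For reference, WCAG 2.x defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors; a small checker like the sketch below (the function names are mine) can verify a palette against the 4.5:1 AA threshold for normal text.

```typescript
// WCAG 2.x contrast ratio between two sRGB colors (0-255 channels).

function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x
  );
  return (hi + 0.05) / (lo + 0.05); // AA normal text requires >= 4.5
}
```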

Reusable components

A component library that ensures clear guidance in any situation

Each function is paired with custom guidance text, and I designed dedicated components to accurately represent them.

Branding

A FigPal-style brain figure you’ll see everywhere

After Figma released FigPal in April 2025 and it quickly gained traction, I followed the trend and created our own figure, FigBrain. You’ll see it everywhere, from conversation bubbles to guides.

Stage 5 / Final Design (Prototype)

Act mode

Our prioritized Act Mode / The agent takes initiative and puts the desired info on the board, with a follow-up section guiding further actions.

Ask mode

Our secondary Ask Mode / The agent responds to user questions and provides answers, paired with suggested actions to continue.

Reflection

1️⃣ Simplicity vs. complexity

Interactions with AI feel incredibly simple because of its inherent power, but that same capability means what appears seamless to users actually demands careful consideration of interaction flow, clarity, and underlying system behavior.

2️⃣ Design as product decisions

Serving as both designer and PM in a small team made me deeply aware that every UX decision directly shapes the product’s trajectory and long-term vision; even subtle choices, such as including an Ask/Act mode toggle, can significantly impact user experience.

➡️ Iteration and future directions

Due to the limited timeframe, we haven’t yet achieved a fully polished product, but I’m excited to keep refining it through real user feedback, moving closer to an intuitive and truly intelligent experience.
