Product Design
FigBrain
An AI-powered Figma plugin that streamlines brainstorming by integrating structured AI support directly into the canvas.
Timeline:
3 weeks
Status:
Launched (2025)
My Role:
Product Designer
Team:
1 designer, 2 developers



Problem :
Background
Our team joined the AI Agents Hackathon held by Microsoft this spring and faced the challenge of using creativity to build an AI agent in 3 weeks.
My Role
As the only designer on the team, I led the product direction and conducted the end-to-end product design while working closely with two developers.

Product Features
Within this limited timeframe, our team delivered an AI-powered Figma plugin, FigBrain, designed to streamline brainstorming by integrating structured AI support directly into the canvas. It highlights the following features:
1. Provide structured ideas with visual layout (a minimal code sketch follows this list)
2. Act Mode with instant AI actions
3. Ask Mode with smart follow-ups
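For readers curious how feature 1 can work under the hood, here is a minimal sketch of laying generated ideas out as FigJam stickies. `figma.createSticky()`, `loadFontAsync()`, and `scrollAndZoomIntoView()` are real Figma plugin APIs; the grid math and the `layoutIdeas` helper are illustrative assumptions, not our shipped code.

```ts
// Sketch: lay out AI-generated ideas as a grid of stickies in FigJam.
// figma.createSticky() is part of the official FigJam plugin API; the
// grid math and the layoutIdeas helper are illustrative assumptions.
async function layoutIdeas(ideas: string[], columns = 3): Promise<void> {
  // Sticky text uses Inter Medium; the font must load before writing text.
  await figma.loadFontAsync({ family: "Inter", style: "Medium" });

  const gap = 40;
  const stickies = ideas.map((idea, i) => {
    const sticky = figma.createSticky();
    sticky.text.characters = idea;
    // Place stickies in rows of `columns`, spaced by their own size.
    sticky.x = (i % columns) * (sticky.width + gap);
    sticky.y = Math.floor(i / columns) * (sticky.height + gap);
    return sticky;
  });

  // Select the cluster and bring it into view for the user.
  figma.currentPage.selection = stickies;
  figma.viewport.scrollAndZoomIntoView(stickies);
}
```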
Final Product & Impact
Following my design, the final product launched as an official Figma plugin, boosting brainstorming efficiency by XX% and streamlining how users generate and organize ideas.
How did we achieve this in 3 weeks?
Stage 1 - Define
Initial Idea :
When deciding what to build for the hackathon, we were drawn to our own brainstorming pain points. With the rise of AI tools, people increasingly rely on them to gather ideas — yet this has only made brainstorming more fragmented. Users often jump between ChatGPT, Google, and FigJam before manually organizing everything into a structured canvas. So we asked ourselves:
What if an AI agent could help users lay out and structure ideas directly in Figma (FigJam)?
Existing Product Research :
Pros & Cons
We quickly began exploring existing brainstorming tools and created a comparison chart highlighting the pros and cons of five key products.



Our Opportunity
Building on that, we mapped these key products onto a radar chart using five recurring features from the comparison analysis.
The radar chart confirmed our hunch: tools built for collaboration lag behind in AI depth.
We saw an opportunity to design an agent that combines structured thinking, visual output, and intelligent guidance, all in one place.




Interviews :
To validate our assumptions further and uncover pain points in current brainstorming workflows, we conducted 1-on-1 interviews with four users in different industries who regularly use tools like FigJam, Notion, and ChatGPT in their ideation process.
We began the interviews with two core questions:
What does your current brainstorming process look like?
How do large language models (LLMs) fit into that process—if at all?
User Journey
Among the four interviewees, Cynthia had the most hands-on experience with brainstorming tools and frequently used various existing products. We chose her user story as a starting point to frame and define our design solutions.



Pain Points (Root Causes)
Based on the insights and user journey, we distilled three main pain points.
LLM output lacks structure and usability
LLMs generate long, unfocused replies not suited for brainstorming.
Users struggle to extract actionable ideas from verbose output.
Prompting feels like trial-and-error without clear guidance.
Fragmented tools break the workflow
Brainstorming happens across ChatGPT, Notion, Figma, PDFs, and whiteboards.
No single flow to collect, structure, or track insights.
Offline and team collaboration don’t connect smoothly with AI tools.
Manual rework slows things down
AI output doesn’t match team formats or project frameworks.
Users must reformat, rewrite, and adapt content by hand.
This reduces efficiency and limits the value of AI acceleration.
Stage 2 - Design Decisions (low-fi)
Position Matrix
While the AI agent enables a non-linear and powerful workflow, its complexity requires us to simplify the user flow to ensure intuitiveness. To support this, we created a Position Matrix early on to guide our design decisions.



Design Challenge 1 : How to distinguish Ask & Act Mode
Define
The first key design decision emerged when we realized there are moments when:
the agent can’t detect the user’s intent, or
users simply want to ask rather than take action.
So how can we distinguish those moments to keep the key interaction flow smooth?
Iteration 1
We started with a manual toggle between Ask and Act modes to give users control.
✅
Users had full control over which mode to enter, making the system logic explicit and predictable.
❌
Increased cognitive load and disrupted the flow, risking a fallback to a typical LLM experience.
➡️
Switch focus back to Act and simplify the interaction.


Final Decision
✍️
Shifted to automatic mode detection, where the agent acts when possible and defaults to Ask Mode with follow-up prompts when not.
💻
Built intent detection to clearly distinguish when an Act should be triggered.
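As a hedged sketch of what that routing can look like: classify the prompt, act when a concrete action is detected confidently, and fall back to Ask Mode otherwise. The `classifyIntent` classifier and the 0.7 threshold are illustrative assumptions, not our production logic.

```ts
// Sketch of automatic Ask/Act routing: act when a canvas action is
// detected with confidence, otherwise fall back to Ask Mode.
// `classifyIntent` is a hypothetical LLM-backed classifier and 0.7 an
// illustrative threshold; neither is the team's actual implementation.
type Mode = "ask" | "act";

interface IntentResult {
  action: string | null; // e.g. "generate", "categorize", or null if unclear
  confidence: number;    // classifier score in [0, 1]
}

declare function classifyIntent(prompt: string): Promise<IntentResult>;

async function routePrompt(prompt: string): Promise<Mode> {
  const intent = await classifyIntent(prompt);
  // Act only when a concrete action was detected confidently; otherwise
  // default to Ask Mode, which answers and suggests follow-up actions.
  return intent.action !== null && intent.confidence >= 0.7 ? "act" : "ask";
}
```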
Design Challenge 2 : How to provide action selection?
Define
Although users can interact freely with our agent, incorporating an action selection bar can help guide the brainstorming process and allow us to tailor prompt engineering for better outcomes.
Still, we must consider:
Will offering predefined actions unconsciously restrict user ideation?
If we move forward, which actions are most useful to include?
And how should we design them?
Research
To answer the first two questions, we combed through research papers with DeepSearch and found the following evidence:
01.
No, if done right. In fact, it can enhance creativity and output.
“Each technique produced similar originality scores, and using several in sequence did not lead to idea exhaustion.” -- Ritter, S. M., & Mostert, N. (2018), Creative Industries Journal
02.
Generate, Clarify, Refine, Categorize Ideas
Humans naturally engage with AI agents during ideation to generate, clarify, refine, and categorize ideas. -- Muller et al. (2024)
Iteration 1
Based on the structured flow of brainstorming with an AI agent, we designed seven functions that users can trigger with a single click.
✅
Quick and intuitive next-step selection.
❌
It’s more of a user-driven flow than an agent-led one. This risks shifting effort back to users and narrowing outputs.
➡️
Recenter the process around user prompts, with the AI Agent taking the lead as intended.
Final Decision
✍️
Simplified the 7 functions down to 5, closely aligned with the AI Agent’s brainstorming flow to guide users through the process:
[Generate] [Clarify] [Refine] [Categorize] [Fill]
💻
We gave special attention to other frequently used functions (Summarize, Extend, etc.) on the back end through dedicated prompt engineering.
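A minimal sketch of that pattern, pairing each one-click action with a dedicated prompt template. The template wording and the `ACTION_PROMPTS` map are illustrative assumptions, not our actual prompt engineering.

```ts
// Sketch: each of the five actions maps to a dedicated prompt template,
// so a single click tailors the request sent to the model. The wording
// here is illustrative, not the plugin's actual prompts.
type Action = "generate" | "clarify" | "refine" | "categorize" | "fill";

const ACTION_PROMPTS: Record<Action, string> = {
  generate: "Produce 6-8 short, distinct ideas about: {topic}. One line each.",
  clarify: "Ask 3 focused questions that would sharpen this idea: {topic}.",
  refine: "Rewrite this idea to be more concrete and testable: {topic}.",
  categorize: "Group these ideas into 3-5 labeled clusters: {topic}.",
  fill: "Fill the empty cells of this framework with fitting ideas: {topic}.",
};

function buildPrompt(action: Action, topic: string): string {
  // Less-frequent intents (Summarize, Extend, etc.) are handled the same
  // way on the back end rather than exposed as buttons.
  return ACTION_PROMPTS[action].replace("{topic}", topic);
}
```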
Design Challenge 3 : How should the conversation scroll within the frame?
Iteration 1
The first version we tested followed a GPT-style scroll, where the user prompt is pinned to the top and the generated response appears below.
✅
User context stays visible, and answers read in sequence.
❌
Long answers push the follow-up section out of view.
➡️
Ensure follow-ups are visible on screen when the answer is generated.
Iteration 2
To keep the follow-up section always visible, we tested a segmented chat-bubble scroll.
✅
Follow-up section visible and easy to follow.
❌
Two scrollable areas in a small plugin interface make precise interactions difficult.
➡️
Simplify the interaction.
Final Decision
✍️
A push-up scroll design to keep follow-ups visible and simplify interaction within the limited plugin space.
Need to scroll back to read answers?
💻
We prompt-engineered Ask Mode to keep answers short and focused, encouraging users to rely on follow-ups for more detailed, structured, on-canvas actions.
We are not reverting to a typical LLM chat!
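One possible implementation of the push-up scroll, as a sketch: the DOM structure and the `scrollIntoView` approach are assumptions about how this could be built, not the shipped plugin code.

```ts
// Sketch of the push-up scroll: when a new answer renders, align it with
// the top of the single scroll area so the short answer plus its
// follow-up chips fit on screen. Element ids are illustrative.
function pushUpLatest(chat: HTMLElement): void {
  const latest = chat.lastElementChild as HTMLElement | null;
  // Scroll the newest exchange to the top edge; with answers kept short,
  // the follow-up section beneath it stays visible without extra scrolling.
  latest?.scrollIntoView({ block: "start", behavior: "smooth" });
}

// Usage: call after appending the rendered answer + follow-up section.
// pushUpLatest(document.getElementById("chat")!);
```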
Design Challenge 4 : How to integrate tutorials to improve ease of use?
Iteration 1
The first version used hover-based tutorials, allowing users to preview each function through a short animation.
✅
Gives users contextual help directly.
❌
The plugin surface is too limited, making hover animations unclear and hard to follow.
➡️
Simplify and separate the tutorial to make it clearer.
Final Decision


Introduction & Overlay Tips
✍️
Emphasized that Act Mode is a core part of the system and is prioritized when applicable.


Input Box Hints
Added input hints in a “You do this — Agent does that” format to guide users and reduce uncertainty.


Action Feedback
Tailored progress feedback to reflect the agent’s specific actions.
Stage 3 - Design Systems
Colors & Typography :



Components :
Reusable Components
To ensure a smooth experience, each function is paired with custom guidance text, and we designed dedicated components to accurately represent them in the final product.
Branding
Since this is a Figma plugin, we wanted it to feel native to the Figma environment. After Figma released FigPal in April 2025 and it quickly gained traction, we followed the trend and created our own character, FigBrain. You will see it everywhere, from conversation bubbles to guides.










Stage 4 - Final Design (Prototype)
Act Mode with follow-up section guide :
Ask Mode with follow-up action :
Reflection
1️⃣
Simplicity vs. Complexity
Interactions with AI feel incredibly simple because of its inherent power, but that same capability means that what appears seamless to users actually demands careful consideration of interaction flow, clarity, and underlying system behavior.
2️⃣
Design as Product Decisions
Serving as both designer and PM in a small team made me deeply aware that every UX decision directly shapes the product’s trajectory and long-term vision; even subtle choices, such as including an Ask/Act mode toggle, can significantly impact the user experience.
➡️
Iteration and Future Directions
Due to the limited timeframe, we haven’t yet achieved a fully polished product, but I’m excited to continue refining it through real user feedback, ultimately moving closer to an intuitive and truly intelligent experience.