There was some quite insightful AI-themed content at the Figma conference “Config 23”:
- GPT-4 case study by Duolingo, “Designing with AI: building the flagship GPT-4 language product”: insights into the design process of an entirely new product (Duolingo Max) built around GPT-4, offered from the perspective of non-engineers. One example of what they realized didn’t work for sustained user engagement: a plain chatbot where the user has to initiate “a session” (“ask me something”). Instead they found that a personalized setting works well, e.g. a slightly wacky language-tutor persona in which the chatbot (i.e. GPT-4) looks for mistakes and offers advice.
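The contrast between the two engagement patterns can be sketched in code. This is my own illustration using the common chat-completion message format; the persona text and function names are hypothetical, not Duolingo’s actual implementation:

```python
# Hypothetical sketch of the two engagement patterns: a bare chatbot
# that waits to be asked vs. a persona tutor with a standing task.
# Persona wording and function names are illustrative assumptions.

def bare_chatbot_messages(user_text: str) -> list[dict]:
    """Plain chatbot: the user must initiate each 'session'."""
    return [{"role": "user", "content": user_text}]

def persona_tutor_messages(user_text: str) -> list[dict]:
    """Persona-driven tutor: a system prompt gives the model a character
    and a standing task (spot mistakes, offer advice), so it can drive
    the conversation instead of waiting for an explicit request."""
    system = (
        "You are Lily, a slightly wacky language tutor. "
        "Watch the learner's messages for mistakes, point them out "
        "playfully, and offer a corrected version plus one tip."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]
```

Either message list can then be passed to any chat-completion-style API; the difference in user experience comes entirely from the standing system prompt.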
- “AI and the future of design: Designing with AI”; aspects I found new/noteworthy:
- idea: move from “atoms” up the stack to patterns, letting AI do the monotonous work in Figma: Timestamp 7:05
- use prompts to chain plugins together: Timestamp 12:45. This seems different from Google I/O, where I got the impression that their tendency is to hide the prompts away (on mobile - so it’s not entirely an apples/apples comparison, admittedly). It’s interesting to see prompts take hold as a UX paradigm.
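The “prompts chaining plugins” idea can be illustrated with a toy dispatcher: one natural-language prompt is turned into an ordered chain of plugin calls. A real system would presumably use an LLM for the routing step; here a naive keyword table stands in so the sketch stays self-contained, and all plugin names are hypothetical (this is not Figma’s plugin API):

```python
# Toy sketch of prompt-driven plugin chaining. The keyword-matching
# router is a stand-in for an LLM-based planner; plugin names and
# behavior are invented for illustration.
from typing import Callable

Plugin = Callable[[str], str]

PLUGINS: dict[str, Plugin] = {
    "translate": lambda doc: doc + " [translated]",
    "recolor":   lambda doc: doc + " [recolored]",
    "export":    lambda doc: doc + " [exported]",
}

def plan_chain(prompt: str) -> list[str]:
    """Naive router: select plugins whose keyword appears in the prompt,
    in the order they are mentioned."""
    mentions = [(prompt.lower().find(k), k) for k in PLUGINS]
    return [k for pos, k in sorted(m for m in mentions if m[0] >= 0)]

def run_chain(prompt: str, doc: str) -> str:
    """Apply the planned plugins to the document, one after another."""
    for name in plan_chain(prompt):
        doc = PLUGINS[name](doc)
    return doc

print(run_chain("Recolor the frame, then export it", "frame"))
# frame [recolored] [exported]
```

The interesting UX point is that the chain itself is never written by the user as code; the prompt is the program, and the router decides which plugins run and in what order.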
- “AI and empowering creative careers”
- Talking point: “Commercially safe models” (Timestamp 08:02):
- Adobe Firefly not knowing who Spiderman and Mickey Mouse are, is a feature, not a bug.
- “Top” results returned should be “sorted by” / sampled by diversity
- (when talking about the Content Authenticity Initiative, Blockchain/Distributed Ledger was not a talking point. This contrasts with e.g. Wacom)
- Other mildly interesting big-picture style talk:
- “The crescendo of AI in our collective future”.
- “these large language models, when you have hundreds of billions of parameters, it’s turning computer science more into like a natural science” (Timestamp 3:29)
- “I think we’re kind of in Web 1.0 of AI, and there’s 2.0 coming. Where we are right now is with generative AI, and I think what’s next is kind of agentive AI, so systems that can actually take action in the world for us […] So then, that’s the question: like when does a language model ever push back on you and tell you this is not the right course of action? I think we should do this instead. And there’s a whole bunch of skills involved in reasoning, like knowing when to ask questions, clarify, playing out a scenario, and deciding, “Oh, this is not the right way to go. I should do this other thing instead.”” (Timestamp 4:25)
There is more; see the full playlist: Config 2023