Facing the uncanny valley of generative AI



Mental models and antipatterns

Mental models are an essential concept in UX and product design, but they have yet to be fully embraced by the AI community. In the latest volume of the Thoughtworks Technology Radar, we explored why mental models matter for understanding AI systems.

We highlighted practices such as complacency with AI-generated code and replacing pair programming with generative AI as ones to avoid, because they rest on poor mental models that fail to acknowledge the technology’s limitations. As AI coding assistants become more convincing and “human-like,” it becomes harder to maintain an accurate picture of how the technology works and where its solutions fall short.

Deploying generative AI into the world poses similar risks, with the potential to mislead or unsettle users. Legislation such as the EU AI Act is being introduced to address these challenges, for example by requiring deepfake creators to label their content as “AI generated.”

This issue extends beyond AI and robotics. As Martin Fowler has discussed in the context of cross-platform mobile applications, each platform and context carries its own set of assumptions and mental models, and mismatches between them shape the user experience.

Shifting our perspective

Shifting our perspective on generative AI’s limitations can lead to a better understanding of its capabilities. Ethan Mollick, a professor at the University of Pennsylvania, suggests viewing AI not as good software but as “pretty good people,” encouraging a reevaluation of our approach.
