Designing with AI
Reflections on how AI could transform product design and development
As 2023 draws to a close, almost every player in the tech industry is locked in a frenetic race to innovate and integrate AI technology. We’ve been familiar with machine learning for some time, and Large Language Models (LLMs) are not exactly a novelty. However, ChatGPT opened Pandora’s box by showing us new ways in which humans can talk to machines. This advancement prompts us to ask: What uncharted territories does this open for us as product makers? What roles and responsibilities must we embrace in navigating this new frontier?
In this article, I share a collection of insights gleaned from my recent explorations and experiments while designing product features with AI. These experiences have not only broadened my understanding but also highlighted the evolving role of designers in an AI-centric world. Before going deeper, let’s start with the basics.
Design is about connecting humans with technology
The history of humankind is the story of a species with inherent limitations ingeniously overcoming those barriers through the astute use of available resources. Consider the simple act of attaching a rock to a stick: it transforms the stick into a formidable tool and weapon by capitalizing on the human ability to throw. That, to me, is design.
As our resources have evolved to become more complex, so has technology. However, the human capacity to understand and effectively utilize this technology remains a critical bottleneck. The development of the Graphical User Interface (GUI) exemplifies this. Computing technology only significantly impacted society when user-friendly metaphors enabled people to apply pre-existing knowledge to new tools. The GUI revolutionized how we interact with computers, a breakthrough paralleled by LLMs (mainly ChatGPT) in their use of everyday language for human-machine communication.
Direct communication with machines unlocks a plethora of opportunities, but as we begin to explore these new possibilities, we quickly encounter a significant challenge: the inherent flaws in human language and communication. This dilemma highlights yet another limitation of our species. Often, we struggle to pinpoint exactly what we want, and even when we do, articulating it effectively can be a challenge. This raises a critical question about the future of user experience (UX): Is it prudent to base it largely on human communication, with all its imperfections? Perhaps the true revolution in AI will emerge from another perspective — one that might be more profound and impactful in its application.
Our way of making software is obsolete
A few lines above, I shared how design is about connecting humans with technology that becomes increasingly complex over time. This scenario has compelled the industry to develop more intricate and specialized methods for software development. Consequently, the need for sophistication in our approach to technology integration has become the defining paradigm of our current era.
Programming necessitates increasingly precise instructions and complex frameworks to scale, resulting in extended iteration cycles where products evolve and improve at a slower pace. This elongation of the development process is not only costly but also fraught with uncertainty.
From a design perspective, we attempt to mitigate the risks associated with this method of product development through testing and research as a way to close the cycle. Nonetheless, our ability to quickly adapt remains a crucial factor in distinguishing successful products from the rest. A review of the most successful digital products in recent years reveals that many have triumphed largely due to their agility in adapting and evolving. Here, ‘speed’ emerges as the key term.
AI could speed up the development process
The recent introduction of custom GPTs (user-configurable versions of ChatGPT) offers a glimpse into the potential future of software development. While still in its early stages, the trajectory is clear: a future where building and configuring systems through conversation is not just possible but commonplace. The true strength of LLMs lies not solely in their direct impact on end-users, but in the paradigm shift they bring — a transition from traditional algorithmic and programmatic instructions to conversational development.
This shift paves the way for a more interactive approach to software design, fostering greater synergy between systems and their applications. Such interactivity significantly lowers barriers for designers, allowing them to become more integral to the development process. They can directly interact with systems to construct parameters, boundaries, and constraints. This implies a new era of software development, where adjustments can be made swiftly in response to immediate feedback derived from data analysis — a domain where AI can further accelerate processes.
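To make this concrete, here is a minimal sketch of what "designers constructing parameters, boundaries, and constraints" could look like in practice: behavior expressed as plain, editable data that gets compiled into an LLM's system instructions. All names and fields here are my own illustrative assumptions, not any real product's API.

```python
# Hypothetical sketch: a designer tunes a feature's behavior by editing
# constraint data, which is compiled into the system instructions an LLM
# receives. No code changes are needed to iterate on behavior.
from dataclasses import dataclass, field


@dataclass
class FeatureConstraints:
    tone: str = "friendly, concise"
    max_reply_words: int = 120
    forbidden_topics: list = field(default_factory=lambda: ["pricing promises"])


def to_system_prompt(c: FeatureConstraints) -> str:
    """Turn designer-defined constraints into system instructions."""
    rules = [
        f"Write in a {c.tone} tone.",
        f"Keep replies under {c.max_reply_words} words.",
        f"Never discuss: {', '.join(c.forbidden_topics)}.",
    ]
    return "You are a product assistant.\n" + "\n".join(f"- {r}" for r in rules)


# A designer iterates by changing data, then reviewing the result:
print(to_system_prompt(FeatureConstraints(max_reply_words=80)))
```

The point of the sketch is the shape of the workflow, not the specifics: adjustments live in data a non-programmer can reason about, which shortens the feedback loop the surrounding paragraphs describe.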
Ultimately, this evolution does not alter our fundamental role; we continue to bridge the gap between users with limited experience and complex systems. However, it does transform how we address problems in the backend, aligning software development more closely with the evolving needs and expectations of both designers and users.
The value of constraints
The integration of AI in delivering specific user-facing solutions reveals a key insight: specificity is everything, and it is often achieved through well-defined constraints. This clarity in constraints enhances knowledge transfer, as seen in applications like Midjourney. In Midjourney, generating images from descriptions appears almost magical. However, the most refined results come from highly detailed prompts, where field knowledge injects specificity into the instructions. This is especially true for images mimicking reality.
Consider a photographer who is an expert in camera angles, lighting, and composition. Their intuitive mastery of these elements often translates into superior AI-generated images, outperforming those created by novices. Now, imagine this photographer launching a website devoted to cinematic photography. They could skillfully transform simple user requests, like ‘create a photo of a mountain landscape,’ into detailed prompts with tailored parameters and constraints. This approach ensures the best outcomes for specific styles, albeit with some limitation on user customization. However, this also simplifies the process for users who may lack technical expertise. It is a prime example of how effective communication of parameters and constraints, facilitated by LLMs, can significantly enhance the synergy between end users and technology, optimizing both the user experience and the final product.
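The photographer scenario above can be sketched as a small prompt-wrapping step: the site takes a plain request and injects the expert's constraints before it ever reaches the image model. The constraint names and values below are purely illustrative assumptions, not Midjourney's actual parameter syntax.

```python
# Hypothetical sketch: wrapping a simple user request in expert-defined
# photography constraints. Keys and values are illustrative only.
EXPERT_CONSTRAINTS = {
    "style": "cinematic photography",
    "lens": "35mm",
    "lighting": "golden hour, soft shadows",
    "composition": "rule of thirds, wide establishing shot",
}


def build_prompt(user_request: str, constraints: dict = EXPERT_CONSTRAINTS) -> str:
    """Inject field knowledge into a plain-language request."""
    details = ", ".join(f"{k}: {v}" for k, v in constraints.items())
    return f"{user_request.strip()}, {details}"


print(build_prompt("create a photo of a mountain landscape"))
```

The trade-off the paragraph describes is visible in the code: the user's input stays simple, the expert's constraints guarantee a consistent style, and customization is deliberately limited to what the curated constraint set allows.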
In the final analysis, user needs are fundamentally consistent, anchored in the unchanging aspects of human nature. This persistence of human desires and challenges underscores a never-ending challenge in technology: reducing cognitive load. Users consistently seek the most efficient ways to achieve their goals, desiring solutions that deliver the greatest results with the least effort. This fundamental truth is likely to remain unchanged, even in a world increasingly driven by AI.
We are all responsible for what is coming
Reflecting on these possibilities, I’m inclined to think I have a clearer vision of the future. Yet, I’m quite aware that this perspective may soon become outdated as AI evolves rapidly. Future models might minimize cognitive load by themselves, streamlining user experiences. The task of designing technology to seamlessly interact with humans could even become fully automated. However, amidst these advancements, a striking realization emerges: our interaction with technology has always been one of using tools. As long as there is a role for humans, technology will continue to serve as a tool. Our actions and decisions, as professionals shaping this future, will play a decisive role in ensuring that humans remain a vital part of the technology equation, navigating this new landscape while keeping human relevance at the forefront.
Eder Rengifo is a Senior Product Designer at Automattic, currently working on Jetpack features powered by AI.