Humans vs. AI: Who’s Better at Designing?
Some may think that the essence of a designer’s work is to create beautiful pictures. Our philosophy, however, is that a designer creates user experience. Moreover, not every designer is even supposed to draw well. Alexey Kulakov, JetStyle’s co-founder and CEO, shares his vision of what design is, what the meaning of a designer’s work is, and whether digitalization can replace designers completely:
I could write a checklist of a designer’s core hard skills that would matter most in the era of AI – but it would be unprofessional of me. The industry is constantly evolving, and that affects the designer’s profession as well. That’s why in this article I cover the aspects of a designer’s work that are most influenced by technological innovation.
Why would you want to listen to what I say? I used to work as a designer and was one of the first three web designers in my home city. Now I’m the CEO of a digital production studio, and I manage a team of designers. We work in product development, AR/VR/MR and UX/UI, and design competence is core to all of these areas.
What Designers Do & How Automation Changes Their Work
A designer’s work is to change people's behavior. Designers create user experience, not just the way a product looks.
When users experience a positive change in their behavior and feel the value the product delivers, it’s a win-win: the company earns more as a result, too.
I’ve outlined 10 steps of a design process. Not all ten are mandatory for every project, but here is the logical sequence:
- Briefing your client
- Conducting research
- Designing an experience
- Developing prototypes
- Testing the experience
- Developing a style
- Engineering a layout
- Conducting design supervision
- Analyzing performance
- Ensuring interface development
Let’s look at each of these tasks in closer detail and see whether humans or robots are better at completing them.
I’d also like to point out that automation is moving extremely fast. Robots are taking over more and more routine tasks.
It makes sense to delegate these tasks to AI and look for areas where humans are less likely to be replaced. Another path for growth is learning to collaborate with robots.
A disclaimer to set the context of this article: a designer here is not the person who draws the graphical user interface, but the one who designs the target experience.
1. Briefing your client
The first thing to do with a new project is figure out the point of the design: what business goal we are working towards and what value we want to create.
Then we need to find out what user experience we should design to achieve that goal. Detailed context is necessary here: we have to describe the environment the end users act in. After that we look for available tools (low-code, ML, front-end development, etc.) that can help solve the task. With everything taken into account, we propose options for solving the problem and a vision of what the future product may look like.
Let’s talk about Midjourney, Stable Diffusion and DALL-E. Several R&D teams created diffusion neural networks that can turn a text prompt into an image. They built this capability into new products – thus creating a brand new experience of interaction between robots and humans. Each team had its own understanding of the goal and its context, and developed its own product: the teams behind Stable Diffusion and DALL-E built graphical interfaces, while Midjourney built a Discord chat bot experience.
Which of these options turned out to be the biggest success? To cut a long story short, I’d say Midjourney managed to be the most practical one: with a more user-friendly interface, the tool provides an effective peer-to-peer learning mode. The moral of the story: your choice of tools for solving a task predetermines the design of the future product.
How do designers cope with this task?
Most of the time, they are not great at it. Usually, it’s the managers who brief the clients and then pass the task over to the designers. Because of this, all the decisions that influence the design are made without designers.
If a designer knows how to brief the client, then they are more relevant and valuable to the team.
What about robots?
Describing the task is an analytical job. It requires empathy, responsibility and the right to make decisions. Humans are always going to be better at this.
Robots, however, can be helpful. We can use tools that transcribe voice recordings into text (Noty.ai, Otter.ai) and services that summarize the transcript (e.g. Notion AI).
AI can also simplify communication: the Voilà Chrome add-on uses ChatGPT to help write business letters.
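If you prefer to wire this into your own workflow rather than use an off-the-shelf service, a large-language-model API can handle the summarization step. Below is a minimal sketch in TypeScript using the OpenAI Node SDK; the model name, prompt wording and file path are assumptions for illustration, not a recommendation of a specific setup.

```typescript
// Minimal sketch: summarizing a client-briefing transcript with the OpenAI Node SDK.
// Assumes OPENAI_API_KEY is set in the environment; the model and prompt are illustrative.
import OpenAI from "openai";
import { readFileSync } from "fs";

const client = new OpenAI();

async function summarizeBriefing(path: string): Promise<string> {
  const transcript = readFileSync(path, "utf-8");
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model choice
    messages: [
      {
        role: "system",
        content:
          "You assist a design team. Summarize the client briefing: " +
          "business goal, target users, constraints, and open questions.",
      },
      { role: "user", content: transcript },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Usage: pass the transcript produced by a transcription tool.
summarizeBriefing("./briefing-transcript.txt").then(console.log);
```

The point is not this particular stack – any transcription tool plus any summarization model gives the same shortcut: the designer spends less time on meeting notes and more on the decisions behind them.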
2. Conducting research
Before we design a new experience, we should pay attention to the current one. In what situations do people use a similar product? What motivates them to look for better alternatives? To research people’s behavior, we use analytics tools and talk to users a lot.
Designers need to identify the right segments of the target audience among all users and spot their behavior patterns. It’s also crucial to research best practices – existing features, design, reviews, early prototypes, academic papers.
How do designers cope with this task?
Deep research is quite rare among professionals. They usually take only the first few steps – such as reviewing publicly available prototypes – without going any deeper.
A specialist capable of conducting research can find an area that does not yet have a best practice.
What about robots?
For now, there is no service that will conduct full-scale research for you, but you can use AI-based tools that cover individual tasks. They can find:
- collections of interface patterns;
- current best practices;
- research dashboards that gather users you can talk to and learn about their experiences;
- methods for finding insights from data;
- data on how people's behavior changed depending on different factors, etc.
The Internet offers a lot of information about user behavior. The bigger the company, the more automated tools it will use to find this information. However, designers will still need their empathy, background knowledge and ability to process information.
3. Designing an experience
Designing an experience is creating user scenarios, and this is designers’ core job. When they create designs, they don’t pay as much attention to how the technical system behaves. Their primary focus is what users do when they interact with that technical system.
I would also like to place special emphasis on the content of the screens. A common mistake designers make is trying to create a ‘container’ for any type of content. When they think that this is their task, they create websites with ‘Lorem ipsum’ in every text box. That’s not the right way to design an experience, because we lack the context for users’ behavior. If we lack context, we’ll never know exactly how users interact with the interface. And dummy text gives zero insights during testing.
I’m not saying that designers are supposed to be as talented at copywriting as journalists. It would be amazing, but it’s neither realistic nor necessary. However, they are supposed to know what kind of content the website will have: that gives them the basis for correct terms of reference for editors and copywriters.
Another aspect of design I should mention is key metrics. Metrics tell us whether we are close to achieving our goals. Of course, design is not the only thing that influences metrics. For example, a marketing campaign will help increase the number of people who use our interface. But whether they continue to use it or not is definitely something design affects.
Design affects the indicators that reflect changes in people's behavior after they start using the product.
This is where we should talk about interface patterns: what human-machine interaction looks like in different environments, such as graphical or audio interfaces, or purely physical interaction.
A behavior pattern is a persistent habit of acting in a particular way in a particular context. When someone says an interface is user-friendly or intuitive, 98 times out of 100 it means it’s something they’re used to. People don't have any intuition about interfaces if they haven't encountered anything similar before. So experience design should start with researching how people are used to solving similar problems.
How do designers cope with this task?
Most designers don't know what experience design is – and that's a major problem for the industry. Most know UX as a buzzword, so they have read something about it. But in reality, a designer still thinks in terms of functionality descriptions and interface images, and doesn't think about changes in people's behavior. In other words, instead of designing experiences, the designer is designing screens.
I suspect the reason is that designers tend to come from an artistic background – people who create beautiful things. They think they have to work on visual representation and draw images and screens. They think that the main thing in design is how the product looks.
I went to the Architecture Academy because I wanted to be a book illustrator, not an engineer who designs people's behavior – and that's a very typical path. It would be great if designers came from among directors, game developers, or psychologists: it would be easier for them to create behavioral scenarios.
Interface design is not about how interfaces look, but about how people behave while using them. User-experience designers are called that because they design experiences. Screens are just the means to visualize that experience.
What about robots?
For typical tasks, we can use a huge number of almost ready-made templates. In most cases we don’t even need design work – we just take a template, then edit it once it’s live according to users’ feedback. It’s a great approach that doesn't need designers or developers – we simply implement a best practice.
But when we face a non-typical task, we have to use our heads. I haven’t heard of any successful attempts to delegate this type of task to machines, apart from typical cases. However, designing via dialog interfaces like ChatGPT greatly expands the range of cases that can be considered typical. That is, best practices are discovered and automated faster and faster, and there is less and less room for custom interface development.
We can also look for services that take over routine tasks and help designers – for example, ChatGPT or Gerwin.io for generating layout text.
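To connect this with the earlier point about ‘Lorem ipsum’: a language model can produce plausible, context-specific copy for a prototype instead of dummy text. Here is a hedged TypeScript sketch, again using the OpenAI Node SDK; the screen description, field names and JSON structure are made up for the example.

```typescript
// Minimal sketch: generating realistic placeholder copy for a product-card screen
// instead of "Lorem ipsum". Field names and the screen description are hypothetical.
import OpenAI from "openai";

const client = new OpenAI(); // expects OPENAI_API_KEY in the environment

interface ProductCardCopy {
  title: string;
  shortDescription: string;
  callToAction: string;
}

async function draftProductCardCopy(productContext: string): Promise<ProductCardCopy> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model choice
    response_format: { type: "json_object" },
    messages: [
      {
        role: "user",
        content:
          `Write realistic interface copy as JSON with keys "title", ` +
          `"shortDescription" and "callToAction" for this product card: ${productContext}`,
      },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}") as ProductCardCopy;
}

draftProductCardCopy("wireless headphones in a mid-range e-commerce store").then(console.log);
```

The point is not the specific tool but that prototype text can reflect the real context, which makes later testing far more informative than dummy text ever will.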
Here are a few recommendations for editing images: WatermarkRemover.io for removing watermarks, Upscale.media for upscaling and improving quality, Erase.bg for removing backgrounds, Shrink.media for compression.
Robots also handle task decomposition well. Just describe your goal and share it with GoalGPT – the service will create a detailed plan to achieve it.
4. Developing prototypes
Before specialized software appeared, we drew interface prototypes on paper. This could take up to several days. Although it is useful for your brain, hardly anyone does it anymore.
Today prototyping is done in Figma (one of the most widely used graphical editors). You can draw screens that are barely distinguishable from real ones: they look and behave almost like the real product.
Interactive prototypes usually include the core scenarios – the ones the product is designed for. A music player is for listening to music, and a store is for buying things. So in an e-commerce application, the core screens will be the search page, the product card, the home page with promotions, the shopping cart and the payment page. In reality, the application has not 5 screens and states but, say, 300 – the rest are just less important.
How do designers cope with this task?
Prototyping is a core design competency. However, with Figma it is no longer a skill that only designers possess. Any team member can create a prototype, especially using plugins that turn a website into an editable Figma file in one click.
What about robots?
There are already many experiments where a system offers several screens to choose from and builds an application from them – for example, Midjourney, or no-code services such as Bravo, Softr and BuilderX. Skybox Lab helps create 360° panoramas, Durable builds sites from a text prompt, and Taplink generates sites from an Instagram page.
Still, this only works for common interface patterns. For example, a music player is a common type of app, so the system is able to create something of this kind. If you’re working on an invention – a service that will make people behave in a new way – it will be a challenge.
There are more and more tools that make it easy to build a prototype out of a best practice. Possibly, in the future a machine will recommend prototype ideas. For now, designers have to create new practices themselves.
5. Testing the experience
Testing is usually done by psychologists or usability experts – but in my opinion, every designer should be able to test their interfaces. If you don't know how to test an interface, you can't understand how it works.
I can't imagine designing for a long time and not showing the work to a single user. I usually spend three or four hours drawing the interface and getting a prototype for future testing. This initial interface has only basic scenarios and more or less authentic content. Then I show it to people who are similar to my product’s target audience.
For example, we did this at Rideró, a self-publishing platform I co-founded. We're now designing the reading experience in the current interface, as we're thinking of ways to change how people read books via our service. I showed the prototype to people who read ebooks and watched how they interacted with the interface. As a result of testing, I found that my hypotheses were not correct and my prototypes were not as clear as I’d thought.
The sooner a designer starts testing the interface, the sooner they will be able to create a high-quality experience that people will enjoy.
Testing the user experience consists of several activities:
- In-depth interviews – talking to users to discover details of their experience.
- Surveys – creating questionnaires and analyzing the answers.
- Web analytics – working with tools like Google Analytics to identify which interface elements users need to reach before we can call the interface successful (see the event-tracking sketch after this list).
- Usability testing — observing how a person uses the product. This includes selecting people for testing, preparing questions, and testing hypotheses.
- Analyzing the results – evaluating how the interface works.
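As a concrete illustration of the web-analytics item above: with Google Analytics 4 you can report an event when a user reaches a key step of the core scenario. The sketch below is a minimal TypeScript example; it assumes the standard GA4 gtag.js snippet is already on the page, and the element id and event parameters are made up for illustration.

```typescript
// Minimal sketch: reporting a key interaction to Google Analytics 4 via gtag.js.
// Assumes the GA4 snippet is already installed on the page; "#checkout-button"
// and the parameter values are hypothetical.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

// Fire a standard GA4 e-commerce event when the user reaches the payment step.
function trackCheckoutReached(cartValue: number): void {
  gtag("event", "begin_checkout", {
    currency: "USD",
    value: cartValue,
  });
}

document.querySelector("#checkout-button")?.addEventListener("click", () => {
  trackCheckoutReached(42);
});
```

Events like this tell us how many users actually reach the screens the interface is supposed to lead them to – exactly the kind of behavioral metric discussed earlier.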
How do designers cope with this task?
Pretty well, since everyone has plenty of experience using various services themselves. Still, a designer often doesn't have the full range of research tools at their disposal: in large companies, research is a separate specialist’s job, and in small ones there is often not enough time and money for it.
What about robots?
You can find many survey and testing tools, but they are rarely fully automated. These services help collect data and highlight potential insights. Ideally, AI could learn to draw conclusions from data more cheaply than humans can. In reality, the environment is too diverse, so human judgment is still more effective for this job.
We'll cover the next five steps in the second part of this article. Stay tuned and follow us on LinkedIn – we post lots of content about design and AI.