Fake news, online harassment, extremism – digital platforms are invariably at the centre of such problems, and that’s because of the flaws inherent in their design. But a better future is possible, says Rob Trono.

It’s no secret that parts of the online world are no longer great places to be. Social channels are blamed for creating spaces where extreme discourse thrives, where fake news spreads and where people sometimes feel so unhappy that they leave platforms altogether. At a time when we need online connectivity more than ever, it’s vital that we provide online spaces that help, not hinder.

Design flaws are built in

One problem is that the major social channels are failing to police content. Another, more fundamental, flaw is the way the platforms and environments themselves are designed.

They’re perfectly crafted – indeed, we’ve spent more than a decade perfecting this – to attract and keep users by almost any viable technique. Reward psychology, game mechanics and more hook us in and keep us there. This has driven huge growth for the platforms but, it now turns out, has been terrible for mental health and public discourse.

Marginal change and nuanced opinion are vital to building a better world. Much online design, however, works explicitly against nuance. ‘Like’ and ‘share’ buttons mean that while great content is rewarded, so is a lot of polarising content – the sort of thing that generates an instant, strong reaction.

On top of this, the app, tech and startup ecosystem – investment, funding, buyouts and so on – values the same things: user numbers, retention, eyeballs. In the desperation to keep people logging on, design becomes ‘noisy’; content is created faster and quality drops.

So are we in a permanent negative spiral? How can we improve online design to encourage and reward different behaviours?

Platforms need to be less sticky

The biggest shift we can make is to increase our awareness of what we do and to keep asking ourselves questions. As designers, we need to recognise our unconscious biases and continually interrogate our work. Are we helping people or tricking them? Are we serving others or just pushing a product? We need to be wary of creating products that fit our own needs and hypotheses. And we should make sure we offer users control and transparency over precisely what they’re interacting with – what the product is doing and how it does it.

At its heart, solving this is about giving control back to users. And not the Facebook version of control, where you, the user, have to set a huge number of controls for every element of the site – for precisely who sees what and who can share what. That approach is impossible to navigate, rarely gets used and barely changes the information users are served. The controls users set need to be simple.

But more than that, it’s also about giving people back control over their own behaviour; about making things less ‘sticky’; about lowering the dopamine highs and subsequent lows that users get from refreshing, liking and more. It’s about changing notifications so users don’t receive 100+ a day, and focusing instead on understanding what people respond to and what makes them feel overwhelmed.

Some platforms have already started down this road. Instagram now hides ‘like’ counts, and Pinterest has removed content about self-harm and suicide, actively serving helpful content to users who search for those topics.

Can a model that isn’t purely addictive succeed?

It has to. People are tired of being targeted. The popularity of the digital detox shows that the only way people know how to deal with being overwhelmed is by switching off entirely. Even now, in the midst of a global crisis and social isolation, many are feeling the need to limit their time consuming content online. If they need to get away that badly, something is very wrong.

What marketers can do

The tech industry isn’t thinking properly about what people want or the role it plays in their lives. It has got people addicted, and it seems content to let them either chase their next hit or detox altogether.

We’re at a tipping point in online behaviour, and how we move forward will matter even more in the next few years. Platforms are going to be policed whether they like it or not – particularly now that Ofcom can regulate social media. And as politicians and decision makers learn more about the harm being caused, changing design will be a vital part of the response.

Ethical decisions are often overlooked in the quest for reach, engagement and awareness: however much businesses dislike what is happening on Facebook, they continue to advertise there. But if platforms are proactive in improving positive communication and giving people back control, everyone benefits.

We’ve had rapid growth on a huge scale. Society had generations to absorb the implications of the printing press; now we’re constantly chasing the next new thing, and we haven’t had a chance to decide whether this level of connection is what we really need as a society – whether it’s good for us or not.

Now we need to challenge that thinking. We need to look again at how interaction design happens. What does human-centred design really mean? We need to look to people for answers rather than let tech firms decide that their solutions, whatever they are, are the only thing on offer. The internet is one of the greatest inventions we’ve ever had. Its potential is huge; it’s essential that we stop misusing it.

In the meantime, we have to operate within these systems as marketers and advertisers. Our job is to highlight the issues and to work around the worst of them: to work for the right clients, and to proactively try to make things better. To be truly ethical, we need to beat the system.