How can designers work and design the interaction between humans and smart cities?

Maria Theodorou
Applied Futurist creating tools & sharing ideas, online, on stage, on air, in print & in...  · 18 Feb 2017

If you go back to the 1960s, if you wanted to make a machine do anything, you had to go to an air-conditioned room, learn the machine’s language and encode your instructions in very specific, very manual ways to perform a limited task.

Contrast that with now: if I say the word “Alexa”, my Echo on the other side of the room is going to wonder what I want. One time in two it will actually play the song I want if I shout at it. But you can absolutely see the direction of travel.

It’s an important milestone in that journey from manual introduction of instructions, to point and click, to touch and swipe, to voice and gesture, to no interface at all.

We already have some devices that do this in a limited way – the Nest smart thermostat is in some ways a smart city device. It uses sensors to learn your patterns and behaviour and sets your heating accordingly. Once you’ve done the basic setup, there’s literally no interface – it’s autonomous. It’s learning and responding not to manual input from a human being, but to a rudimentary understanding of human behaviour and what humans want. It knows when you’re in and out, and it senses from your mobile when you’re getting close to home, so it turns the heating on and the house is nice and warm when you walk in the door. That’s kind of where we’re going with the human interface to the smart city.
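The thermostat behaviour described above can be sketched as a simple rule over sensed state. This is a hypothetical illustration, not Nest’s actual algorithm – the names, the geofence radius and the target temperature are all assumptions:

```python
# Minimal sketch of an interface-free device: heating is driven
# purely by sensed state, never by direct user input.
from dataclasses import dataclass

@dataclass
class SensedState:
    occupied: bool            # motion sensor says someone is home
    phone_distance_km: float  # geofenced distance of the phone from home
    indoor_temp_c: float      # current indoor temperature

def heating_on(state: SensedState, target_c: float = 20.0,
               approach_km: float = 2.0) -> bool:
    """Heat when someone is home (or approaching) and the house is
    below the target temperature; otherwise stay off."""
    presence = state.occupied or state.phone_distance_km <= approach_km
    return presence and state.indoor_temp_c < target_c

# Approaching home on a cold evening -> pre-heat
print(heating_on(SensedState(False, 1.5, 16.0)))   # True
# Far from home -> stay off even though it's cold
print(heating_on(SensedState(False, 25.0, 16.0)))  # False
```

The point of the sketch is that the “interface” has disappeared entirely: the only inputs are sensor readings.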

Because nine times out of 10, we don’t want an interface at all. We just want things to respond to our needs without even speaking them. And that’s really where for the most part, the smart city should be.

There’s a corollary to that – you don’t want things happening on your behalf with no way of discovering why. If you’re going to remove the interface and make things just happen because the city thinks they should, you have to be absolutely transparent about why they happened. Everyone will need to see the ‘working out’. That’s going to be an interesting step for cities to get to grips with, given that cities are this weird conglomeration of public and private.

When people talk about smart cities, they have this idea of a council or government-owned infrastructure operating everything. But a truly smart city cannot be that – it’s much more like a garden. You’ve got a garden with a bunch of different organisms living in it that may be fitting into some sort of common plan that a designer laid out. But over time they grow, they expand and they behave differently, they respond differently to different conditions and there’s going to be a constant process of tending and weeding in order to maintain a level of harmony and consistency with the original design.

Bear in mind, 99% of people will never look under the hood. Transparency is important in principle, but a very small number of people are ever going to even look at it, let alone use that information to build new things. And again, the smart city is not going to be this deeply centralised, government-controlled, ordered structure of people with clipboards. It’s going to be a much more collaborative effort. It’s going to have to be one where one group – the government, council, whoever – defines a set of spaces that everyone has to conform to, a set of data that may or may not be shared, and services that you will have access to. How those things are all applied is going to be much more for communities and corporations to define.

There are probably three principles that are going to be the most important.

The first one is if at all possible, have no interface at all. You don’t want to have to have an interface with a pavement, a bus or a street light. At the moment, we have this horrible habit of building apps for everything – I don’t really want an app to force me to turn on the street lights as I walk down the street. I just want them to come on.

The second principle is: show your working out. There should be complete transparency about what algorithms are doing and what sensing activities are happening behind the scenes to make those things happen, so that people can understand them.
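One way to realise “show your working out” is for every automated decision to carry a human-readable trace of its inputs and the rule that fired. The sketch below is illustrative only – the streetlight rule, field names and thresholds are all assumptions, not any city’s real system:

```python
# Hedged sketch: an automated decision that publishes its own
# 'working out' alongside the decision itself.
import json
import datetime

def decide_streetlight(ambient_lux: float, pedestrians: int):
    """Return the on/off decision plus a transparency record
    explaining which inputs and rule produced it."""
    rule = "on if ambient < 10 lux and pedestrians present"
    on = ambient_lux < 10 and pedestrians > 0
    trace = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": {"ambient_lux": ambient_lux, "pedestrians": pedestrians},
        "rule": rule,
        "decision": "on" if on else "off",
    }
    return on, trace

on, trace = decide_streetlight(ambient_lux=4.0, pedestrians=2)
print(json.dumps(trace, indent=2))  # the published 'working out'
```

Even if 99% of people never read such a trace, publishing it is what makes the other 1% able to audit the city and build on it.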

The third principle is hackability. People should be able to change the way the city behaves towards them, within reason. That reason you could probably define as Asimov’s Robot Laws: you don’t want people hacking the city to harm others. But they should be able to create services that change the way the city behaves towards others, either as a commercial or a community service.

https://www.youtube.com/embed/Ck22DpAQJGU?wmode=opaque