The rise of digital placemaking has been supported by emerging digital technologies – from smart clothing to city-wide sensor networks – that go beyond the smartphone.
But while conversations around technological possibility are important, larger ethical questions have come to the fore. There are groundbreaking developments in the Internet of Things, augmented reality and smart cities almost daily, but recently the narrative has expanded from tech possibilities to one of governance, ethics, sustainability, privacy, trust, safety and security.
In July 2018, the UK’s Ministry of Housing, Communities and Local Government published the ‘Local Digital Declaration’, which seeks to transform the next generation of public services, enabled by digital technologies but underpinned by citizen needs, privacy and security. While the declaration currently addresses the digitisation of public services, its values and principles could readily be adapted for the benefit of our shared public ‘smart’ spaces.
Ethics, governance and trust must become central themes for digital placemaking practitioners in the future. To establish our ethical course for the future, however, we have to start with the present.
The current ethical conversation
“Tech companies focus intensely on preventing crashes. A rigorous effort to anticipate what could go wrong is already standard practice for specialists in system reliability, which deals with “what-ifs” around computer failures. A higher standard for safety would simply do the same for “what-ifs” around human consequences.”
Yonatan Zunger wrote these words in the aftermath of the Cambridge Analytica scandal, in which the common practices around big data – mass analysis, psychological profiling, targeted advertising and interactions – were clearly demonstrated as a political force. These techniques are hardly unusual: they’re the bread and butter of digital marketing. Yet when they entered the political arena, deployed to influence democratic rather than consumer choice, a line was crossed and data harvesting entered the public interest.
What’s interesting and crucial for us, as experimenters, placemakers and creators, is that the lessons of the Cambridge Analytica–Facebook scandal are learnt. We must work assiduously to ensure that our present and future cities do not follow the same path.
Computer ethics is nothing new, of course. As far back as 1948, MIT professor Norbert Wiener questioned the ethical issues surrounding information technology in his book Cybernetics. In 1973, the Association for Computing Machinery (ACM) adopted its first code of ethics.
Subsequently, the work of Luciano Floridi – Professor of Philosophy and Ethics of Information at Oxford University, Director of Oxford’s Digital Ethics lab, and Faculty Fellow of the Alan Turing Institute (the UK’s national institute for data science) – investigated the philosophies and ethics of computing in all guises.
Floridi’s work continues today, focusing on emerging technologies and innovations. The difference between the technology of today and that of the mid-90s, of course, is the ubiquity of data and the extent to which it shapes and impacts our day-to-day lives. Our decision-making processes are increasingly swayed by the sheer volume and precision of the data available to us: we’re more aware of changes in our environment than ever, in more detail and at greater pace, too.
For digital placemaking practitioners – those studying, shaping and augmenting the relationships between people, place and technology – ethics form a crucial part of the decision-making process for projects. So what questions should we be asking, and how do we ensure we avoid the moral issues brought to a head by recent revelations?
Ethical issues for future development
“Information Communication Technologies (ICT) have profoundly changed many aspects of life. They have had a radical and widespread influence on our moral lives and on contemporary ethical debates. Examples come readily to mind, from trust online to phone hacking, from the digital divide to a dystopian ‘surveillance society’, from privacy and freedom of expression to Wikileaks, from artificial companions to cyberwar.”
Professor Floridi suggests ICTs are bringing about a fourth revolution, a fundamental change in how we understand ourselves and the world. We are not disconnected individual agents, according to Floridi: we are informational organisms, sharing a global environment that is made of information. Our sense of self, environment and sustainability has to reflect that so many of our circumstances exist purely in the realm of information – they are not practical problems, and they do not have a concrete, permanent solution.
The aforementioned ACM code of ethics still stands today, updated to reflect these modern challenges. Its general ethical principles are:
- 1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
- 1.2 Avoid harm.
- 1.3 Be honest and trustworthy.
- 1.4 Be fair and take action not to discriminate.
- 1.5 Respect the work required to produce new ideas, inventions, creative works, and computing artifacts.
- 1.6 Respect privacy.
- 1.7 Honour confidentiality.
Other organisations are similarly trying to tackle these issues. In 2012, the EU established the Onlife Initiative – a research project that hopes to rethink the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.
The PETRAS Internet of Things Research Hub (a consortium of nine UK universities), meanwhile, is working to establish principles of trust and privacy at the heart of the IoT, retaining human oversight and restraining the automated, automatic use of big data.
The goal is to ensure that people remain responsible for and in control of decisions about how devices are used, and to what end, and to make recommendations on policy that foster responsible development of IoT devices and technologies.
Each of these movements is doing great work in trying to understand, govern and manage our relationships with technology in line with a set of ethical standards. But they are still aimed principally at technologists. Technology is just one element of digital placemaking, however; once people and place enter the equation, the picture becomes more complex.
Many placemaking projects are charged with telling the story of a place, of being authentic and valuable to the communities that will use them day to day. To this end, we also need to ask:
- How can we be thoughtful about how we design our systems?
- Do we have buy-in from all stakeholders?
- Are we helping the environment?
- Do our projects hold authenticity and inclusivity dear?
- Are we telling the right story?
- Are we contributing social, cultural or economic value?
- Does our work enhance people’s experience of a location?
These are challenging questions for anyone working in the public realm, but they are essential to make sure the work we’re doing is valuable to our public spaces and to the people that inhabit them. Before we rush headlong into the unknown – putting technology before people, the apps before the application – we need to really question the ethical ramifications of our endeavours.
Society, like technology, is undergoing a period of dramatic change. In the last few years, we have seen information and communication become bigger and more complex than ever – unethical activity has become easier to recognise, yet perversely harder to stop. The public debate on and around the global infosphere is a reflection of broader, older political and social praxis – an extension of the world which is by no means free of its problems.
People are people: they’re struggling to adapt to a bigger and more complex world than they’ve ever lived in before. To ensure that world is worth living in, we need buy-in from the people who will be living there. Digital placemakers need faith and trust from the people who are going to interact with their work: we can secure that only by being truthful and honest.