The title of today’s contribution reflects a constant concern across all areas of society: from citizens, who become consumers at every moment of the day; through politics, where decisions must be made and regulation enacted, whether by the European Commission at EU level or by the German or Spanish governments within their respective national powers; to, finally, debate and discussion in every business sector. This is not something discussed in the shadows; it is a permanent claim of the present, here and now.
Technological innovation brings both benefits and harms to our societies, and there are countless examples to consider. Encryption technology is a case in point (at this stage we are describing, not yet assessing). It is indispensable for securing banking transactions, private communications and personal data in the cloud. But this good side of protecting our great asset, personal data, has a negative counterpart: it makes it very difficult for state security forces, as well as international bodies such as Interpol, to carry out the monitoring needed to detect and prevent organized crime.
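The double edge described above follows directly from how encryption works: without the key, the ciphertext is indistinguishable from noise, for a bank and for an investigator alike. A minimal sketch in Python (a toy XOR one-time pad for illustration only, not production cryptography; the message is a hypothetical example):

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key: applying it twice restores the original."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"transfer 500 EUR to account 1234"  # hypothetical bank message
key = os.urandom(len(message))                 # secret key, as long as the text

ciphertext = xor_cipher(message, key)          # unreadable without the key
recovered = xor_cipher(ciphertext, key)        # the key holder reads it back
assert recovered == message
```

The same property that protects the account holder (the ciphertext reveals nothing without `key`) is what frustrates any third party, legitimate or not, who intercepts the traffic.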
A capital aspect of encryption as a technological value is that it has been crucial to the various forms of communication available to global society today. But when we look at the big players in the technology sector, such as Google, Facebook, Apple or Twitter, we see that information can be curated by these platforms, whose complex algorithms can allow users to see only what they already like.
Is there then an information filter?
Of course there is: an invisible information filter that can hamper the free flow of information and potentially polarize people’s political views and preferences. And this is not a minor issue. Hence the permanent discussion and debate we referred to above will continue, and rightly so; we would say it is essential in the spheres of political and regulatory decision-making. In short, the search for an ethical and regulatory framework for technological innovation.
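The filtering mechanism described above can be sketched in a few lines (a hypothetical engagement-based ranking, not any platform’s actual algorithm; topics and scores are invented for illustration): items similar to what a user already clicked rise to the top, while unfamiliar viewpoints sink.

```python
# A user's past engagement, by topic (hypothetical counts)
user_history = {"sports": 8, "tech": 5, "politics_a": 6, "politics_b": 0}

candidate_items = [
    ("match highlights", "sports"),
    ("new phone review", "tech"),
    ("party A rally", "politics_a"),
    ("party B debate", "politics_b"),
]

def score(item):
    _, topic = item
    return user_history.get(topic, 0)  # more past clicks -> higher rank

# Rank the feed purely by past engagement
feed = sorted(candidate_items, key=score, reverse=True)
print([title for title, _ in feed])
# "party B debate" ranks last: the user rarely sees the other side
```

Even this crude heuristic produces a feed in which the viewpoint the user never engaged with is systematically buried, which is the polarizing dynamic at issue.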
Information technology is one example of so-called “disruptive technology”, an expression we owe to Christensen in 1997. But let us not forget that new technologies and their disruptive processes also include artificial intelligence, robotics, 3D printing, blockchain and driverless cars, all factors that can profoundly alter existing markets and consumer behavior. We also know that, to track their progress (from a technical point of view) and extension (geographically), studies are under way to build a kind of dashboard showing where and how the various industrial dynamics impact the economic and social spheres of different countries.
How far should the ability to limit innovation go?
We have been asking ourselves this question especially over the last three years, given the acceleration of technological innovation in every field of knowledge: first the test, then immediately the application. The aggravating circumstance is the speed of these processes. There is no time to observe the social consequences of one application (we are speaking of technological tools generally, across the whole IT field) before a second, a third and so on arrive behind the innovation at the forefront. Meanwhile, legislators must try to channel the first of them within existing regulations, or, given its nature, reform an existing law, or even legislate from scratch because there is a blank space the law never contemplated. And the innovations that come next will not allow even time for reflection on their applications and consequences for consumers.
Therefore, any framework can cut two ways: it can support innovation or limit it. It is not easy, but this reality must be addressed seriously and in depth. The leading business sectors driving scientific and technological initiative are always fighting against any kind of restriction on creativity or the innovation process; they believe such restrictions can derail the necessary phases of technological disruption in a very detrimental way. But to safeguard the fundamental values that protect the public interest, we cannot lower our heads and simply let it happen: we must establish an ethical and regulatory framework, answering a series of important questions along the way.
The questions that concern us most
First of all, who is responsible when technology fails? No less important: where should regulation intervene? Next, a very delicate one: can we rely on shared principles and self-regulation, or should the algorithms developed by industry be open to supervisory control?
Although all these questions, formulated in the spheres of political decision-making and by the bodies that represent consumer interests, especially those known as “consumer defenders”, are valid, it is also necessary to consider which regulatory instruments exist at the international level. What international standards should apply to remote computer searches by law enforcement agencies, and how do these standards relate to sovereignty concerns?
No less important is asking what kind of robotic technology should be allowed in applications ranging from care for the elderly and disabled to law enforcement, or even what technology should be available in armed conflicts in certain regions. And entering the field of human rights, the correct and necessary formulation is: what is the concrete relevance of human dignity and human rights to the regulation of innovative technology?
The doubts that arise at a social level
When a certain activity comes to be legally regulated, it is because problems, discussions and debates had previously been taking place and, of course, because situations had arisen that no regulation contemplated and that caused harm to citizens, always bearing in mind the general interest that must guide legislation.
Hence, among the doubts in this area where ethics and innovation collide, we have to take into account:
1º) What are the ethical problems around technology?
2º) What are the most important ethical issues in technology?
3º) Is personal information being used properly?
4º) Is disinformation being abused through “fake news”?
5º) Is there a lack of clear supervision and acceptance of responsibility?
6º) How is AI being used?
7º) What is the degree of progress in autonomous technology, and is it being legally regulated?
8º) What are the ethical frameworks of application for the respect of all users, whether they are consumers or clients?
9º) Is a moral use of data and resources being made by organizations that have access to an unlimited amount of personal information?
10º) Is disruptive technology being adapted responsibly?
It is obvious that these questions (they are not our own; as the colloquial saying goes, “they are in the air”) do not exhaust the problem we are facing. No doubt there are many more. But what we intend with today’s contribution is a reflection on a global social scale, and especially in the spheres of political decision-making.
At the AEEN we believe that ethics is, and will continue to be, a major issue for the future development of our society. It is a fight in which we must not give in: how we treat others, how we use information, how we interact with employees, how we manage resources, and, above all, how each of these issues, valuable in itself, fits into the concept of sustainability that our association also defends. We prioritize everything that involves sustainable development, because it impacts the world around us and ends up shaping the way we see companies.
In fact, it is not uncommon to find inappropriate treatment of people and of the communities in which they live. Whether a company acts in accordance with the principles of corporate social responsibility, which undoubtedly include the sustainability of the economy and care for the physical environment (the planet), can end up making the difference between business success and failure. That is why the large technology corporations, always subject to scrutiny by society, have taken the lead in implementing ethical practices in their decision-making processes.
Major Ethical Issues in Technology
a) Improper use of personal information
We are living in an exceptional era of information technology, in which everything revolves around how companies use the personal information they hold about us. As we browse the Internet and make online purchases, we enter personal data and a series of preferences that, without our realizing it, measure our personality as consumers, and this information can fall into hands with no scruples about the privacy owed to it. In addition, we permanently interact with online businesses and participate in social networks, which also exposes our personal data to becoming public and/or being misused.
Companies often collect information to hyper-personalize our online experiences, but at what point does that collection infringe our right to privacy?
When can a company be said to be going too far in its use of personal information? For companies, it is undoubtedly extremely valuable to know what products people search for and what content they consume most. It is equally important for decision-makers in high political and regulatory spheres to know which social or legal problems are receiving the most attention. The question is whether a clash of interests arises: whether the data exploited so that companies or entities can earn money or advance their objectives, reaching a customer base by knowing its preferences in depth, ends up becoming the marketing of personal data previously collected by the platform.
b) False data
In recent years there has been an alarming episode: the alleged disinformation and falsification of data in the United States elections, with Russian espionage accused of exerting at least some degree of influence on the results in a specific American state. The effect created a polarization with far-reaching consequences in the global economic and political environments.
The way information was accessed before the Internet has nothing to do with how we access it today. Now we are constantly inundated with events and news in real time, as they are published.
Anyone, especially those considered celebrities, can spread opinions on social media without checking the facts. The same happens to politicians, business leaders and others: more than one has fallen into the trap of entering an open, public discussion without confirming whether it had any credibility, after which further inaccuracies are added and spread even more widely despite their imprecision. Information no longer undergoes the arduous validation process once required to publish newspapers and books.
c) Lack of Supervision and Acceptance of Responsibility
In the business fabric, we know that organizations carry out operations implemented by their own teams and many others that are outsourced, as in the case of companies working in software, big data, systems engineering, networks and so on. The question is where responsibility lies for operations that are not run directly but subcontracted to third parties; we are talking about our own technology and third-party technology that we rent or buy. As a result, there is often confusion about where responsibility lies for governance, the use of big data, cybersecurity concerns, and the management of personally identifiable information, or PII. Whose responsibility is it really to ensure that data is protected? If you hire a third party for software that processes payments, do you have any liability if your customers’ credit card data is breached? The fact is that it is everyone’s job: companies must adopt a perspective in which all parties collectively share responsibility.
This is what has recently led security experts, and also the part of the political class most sensitive to these issues, to acknowledge the need for a global approach to regulation and the establishment of a necessary ethical framework. There can be no divorce between policy-making and the widespread mismanagement of data. In the same way that we fight daily for sustainable societies and organizations, we should insist that institutions, governments and companies from all sectors make the same effort to unite around a plausible ethical framework, one seen not as interference in creativity and technological development but as care for people’s fundamental rights. The current situation can certainly be improved.