Analysis
Ethics and Intelligence in the Age of Artificial Intelligence, Part 2/2: Asymmetry

The first part of this analysis is available at this link.
A two-part analysis of how the relationship between ethics and intelligence, in the age of artificial intelligence, has become almost untenable and needs to be redefined on new parameters. This second part: the effects of information asymmetry.
The complexity of the system
In the first part we saw how digital transformation and globalization have changed the traditional intelligence framework in place since the early 1990s. The change, progressive and still ongoing, has been vertical, horizontal and oblique. Beyond the aims imposed by grand strategy, the different levels of that strategy interpenetrate and merge: theatres, operations and tactics must be mutually available, in a flexible manner, at any time. In this perspective, and as necessary, any technical tool and any actor can play a functional role in the strategy.
Labelling has ceased to make sense because the technical and operational boundaries that defined it have ceased to exist. The investments needed to remain competitive have become such that public-private collaboration, in people and means, is an indispensable factor in a context of massive diffusion of information.
The system has thus climbed into the quantitatively and qualitatively complex. It had to evolve as a result of technological innovation and, at the same time, it had to do so in the most advanced forms in order to compete. Intelligent systems, AI, have moved from supporting analysis to being the paradigm required for the discernment of intelligence. Automated knowledge production, which is now involving every sector, confronts intelligence with problems that other fields do not have.
Ethics, norms and innovation
The debate between ethics and innovation (also understood as research) is as old as the hills, and the amount of material available for forming, or updating, one's own opinion is proportional to that age. Recent, focused analyses are helpful because they define articulated principles shared by researchers across latitudes. With the pragmatic intent of dealing with the "time" component, one obvious aspect should be underlined: innovation produces, in the socio-cultural and therefore ethical context in which it operates, actions that did not previously exist. These actions always precede the re-actions of the context itself (the reactions of ethics) aimed at detecting and framing their effects. This causes, for a variable period, an information asymmetry between the suppliers and the users of innovation: the former hold the knowledge, while the latter are passively subject to the effects, not only without mediation but, given the novelty factor, even without reference points to turn to for help in that mediation.
To the objection that this has always been so, because it is inherent in the process by which innovation propagates, we can oppose four factors that digital transformation and globalization have changed compared to the past: propagation time, now equal to zero; the possibility of mass dissemination; background noise; and the impossibility of data oblivion. These elements amplify the information asymmetry exponentially: innovation is shared between specialists in real time; equally in real time it can give rise to powerful developments in niches that are entirely unknown or deliberately concealed; for non-specialists, the real potential of its effects remains indistinguishable amid the mass of information, even (intentionally or otherwise) contradictory information, present on social networks; and the effects cannot be eliminated, since digital data can never be destroyed with certainty.
In other respects, the relationship between ethics and law is notoriously not mutually specular. The former reflects the values of a collectivity; the latter is based on, and conditioned by, local cultural rules that lead to permitting or prohibiting behaviours. If one assimilates the behaviour of innovation to its effects, it is intuitive how the asymmetry at the expense of users increases: the law takes time to identify and evaluate a behaviour and to deem its effects more or less licit.
An observation often made, especially in the Anglo-Saxon world, is that the law is much quicker than ethics in grasping the distortions of novelty and, at best, applying a Band-Aid. This line of thought worked when Moore's Law was the decisive factor, that is, when one reasoned in terms of gains in computational speed. Now the parameters have changed: speed is just one of the incremental factors (however important) that lead an algorithm to produce cognitive output. Others are, for example, accuracy in defining the domain, the direct and indirect libraries available, and the systems for detecting and correcting bias, all things that have little to do with raw computational capacity, except in the case of system or paradigm changes. The law is now harnessed in a multi-dimensionality of inputs that prevents it from reacting any more readily than ethics, as it once did.
Here we return to the problem of the time gap: whoever manages innovation holds all the keys to manipulate it in their own favour, while whoever is subjected to the novelty lacks the keys to defend themselves, remaining unaware of the problem for a long time. Before ethics and legislation are able to process the novelty and produce mediation, the gap can give rise to actions of uncertain status (licit or illicit, ethical or unethical), managed by the few at the expense of the many. Their significance goes well beyond the time-related concepts expressed in theories of business and systemic innovation (see Schumpeter and related work) and follows its own path, because it impacts the interests of weak stakeholders with less specific knowledge: the end users. In other words, it is difficult to imagine starting a movement of opinion in favour of observing or regulating a cause if one is not aware that it exists.
In intelligence, whether public or private, this is the scenario. AI applications, at the different stages of the information cycle, make the "technically possible", for reasons of investment and engineering mastery, a Grail in the possession of a few, who pour its effects onto many who are unaware. With the proper distinctions between national security and private operations, the problem is not how to evaluate behaviours, or how to sanction them (or predict what to enshrine), but the lack of awareness in the target.
Engineering mastery also allows, with everything on the table and for a while, what is called "strategic ignorance": the hypocritical deflection of responsibility toward imaginary self-learning biases and black boxes of various kinds, namely the machines. Notoriously, algorithms in the initialization phase do not design (and codify) themselves, nor, in the phases of self-learning and live output production, are they left to themselves if their behaviour is not in line with the goal that has been set.
Frankly, the writer has little interest in whether, periodically, one of the FAMANG, or some smaller player, is fined millions under the GDPR or other legislation for unlawful conduct. First, because cash reprimands are the already-budgeted cost of the risk of the action; second, because the fines are of such an amount that they do not damage corporate profitability, and therefore have no deterrent effect on future behaviour; third, because the damage is done and there is no way to undo it.
This is not about digital justicialism: that would be foolish given the very nature of innovation. It is about devising processes of informational protection for users which at least, pending cultural and regulatory evaluations, keep them abreast of the phenomenology.
Unless one relies on the cyclicality of events, a Snowden or a Cambridge Analytica who accidentally opens Pandora's box, always and in any case after the damage is done, there is currently no informational protection. There are excellent think-tanks and organizations active in the field of digital rights, including on the investigative side, at local and supranational levels, but they are set up on pre-revolution models of knowledge production. In their niche they inform a select few and address ethical, political and legislative decision-makers according to the timetables of the latter: no one has yet adopted a model that matches the speed of AI implementation in intelligence and uses it to inform the mass of users in useful time.
Intelligence and ethics
The relationship between intelligence and ethics was defined pragmatically, and significantly, by Michael Hayden (at the top of US intelligence over the past decades, commanding the NSA, the CIA and more) about ten years ago, when he offered the image of a target: the centre representing the "politically sustainable" and, moving progressively outwards, three concentric rings representing the "legal", the "operationally sustainable" and the "technologically feasible". With an effort to assimilate the locution "politically sustainable" to "ethically sustainable", the argument might still hold; today, information asymmetry raises doubts.
If we are talking about national security, that is, state intelligence activities, it is commonly accepted that agencies must comply with the inputs of policy makers and with the regulations (special and ordinary) that govern them, whatever the nature of the regime. If information asymmetry enables the "technologically feasible" in the absence of regulatory and political assessments that deal with its effects, operators (at any level) can rest easy as far as accountability for wrongdoing is concerned; the individual subjected to the "doable", rather less so.
Second point: what does "national security" mean (what are its perimeters) in a context lacking norms or political awareness of the applied technology, if that technology is "operationally sustainable" in related areas of public interest, such as ordinary crime and/or taxation? Here too, the public actor who operationally applies the technology commits no unlawful act (the relevant law does not yet exist), nor is he politically incorrect (with respect to a political input which is likewise non-existent); indeed, he performs his work to the best of his ability. In what way, then, is the individual protected?
If ten years ago the paradigm was "collect as much as possible, because sooner or later it might be useful", AI now makes the technology available to exploit everything in real time. If massive collection was once aimed at storage, in view of necessary future uses (aimed at countering illegal behaviour), it is now used to drive behaviours or massive behavioural prediction. Once again: in what way is the individual protected, if these effects bypass the two central areas of Hayden's target and are not even known? This is why the target needs to be reviewed from a new perspective.
More recently Sir David Omand (former head of Britain's GCHQ), in the course of the review of the Investigatory Powers Act passed in 2016, said that the relationship between ethics and national intelligence must be that of a "just war". To underscore the concept, David Anderson (Queen's Counsel on terrorism matters), in the same legislative amendment process, said of the pervasiveness of intelligence that it must have the highest powers, and that the problem is when to exercise them. The final version of the Act adopted these and other suggestions, with the consequence that the two areas of Hayden's target closest to the centre have been erased, updating it to factual reality. It does not matter that the Act was a civil-rights aberration and is the most disputed measure of recent years in the United Kingdom: the purpose of eliminating the ethical problem has been achieved by allowing, in practice, everything.
Thus, with national intelligence, the point is that citizens (users), in the "translucent" area that Michael Leiter (head of the National Counterterrorism Center under George W. Bush) described with good foresight, are aware of being massively tracked, surveilled and archived in every digital, biometric and (progressively, and shortly) bio-genetic act. No one, however, is able to know how these data are used, by whom, what behavioural and predictive deduction techniques are applied, or for what purposes. It is the unrestricted "just war" referred to in the previous post.
For private intelligence, a few lines suffice. Only the perimeters of the "operationally sustainable" and the "technologically feasible" exist; all the rest is left to individual ethics which, as daily evidence shows, is (to use a euphemism) extremely elastic. On the other hand, if the context is the era of "surveillance capitalism", it could not be otherwise; the profit model itself could not otherwise survive. Users are left with the "legal", which, roughly and depending on the legislation, allows them to be informed (when things go well) with a three-year lag.
So what to do?
On the one hand there are the state agencies, which need to apply "unrestricted intelligence" because they consider it politically necessary to their own ends. In doing so they use the resources of private intelligence, for which the term "ethical" (when considered at all) is an oxymoron when set against the meaning of "limit". If it were not so, the competitive advantage of both would be impossible. In the middle stands the mode of development of innovation, which allows neither massive shared information nor the timely adoption of rules to guard against potentially unlawful conduct. On the other side are the users, who progressively, under any regime, learn, with a delay that renders the awareness itself almost useless, that they have entered surveillance bubbles of their habits, compared to which Orwell's 1984 or Black Mirror are children's fairy tales.
Recent surveys indicate that the issue is not at the top of citizens' concerns, even in countries where high technology penetration is combined with high sensitivity to rights. The scenario is not surprising, for two reasons: first, the issue is young; second, disclosure is deficient, for the reasons given above.
In the face of unrestricted intelligence and the consolidation of the surveillance-capitalism model, the response should be a form of social counter-intelligence, widespread, collective and "unrestricted", that allows effective movements of opinion to be created within a useful reaction time, on the model of what has happened for the environment.
At the moment there is no such thing.
This is an adaptation of a neural Italian-to-English AI translation by IBM Watson.