
The Ignorance Wall

[Image: Iron Man gif]

Point 1: The Ignorance Wall
Argument: We cannot articulate the constraints on a definition before we begin.

Intro

Simply because we are embarking on the unknown, it is impossible to articulate the constraints around what we want in the first place. A communication process is necessary to tune into what is desired, and mistakes are made along the way. Error is the natural state, and the path we walk moves us toward being less wrong. Hopefully, we can decrease the time spent in error, and we can decrease the penalties and punishments for acting in error.

This happens when a young person experiences a new source of pleasure. The first time comes with little idea of the limits or constraints that should be placed around it, and people can get caught in it. Aside from the obvious alcohol, drugs, and sex, take hunger: it is possible to feel hungry, think "Eat," and then keep eating without ever thinking there is a limit to one's fullness. To avoid obesity, eating must come with a constraint, such as portion control, which might say, "eat one portion of vegetables and one portion of meat." These are errors of constraint.

There are also errors in specificity. Specificity errors exist alongside errors of constraint; at face value, their effect can be mistaken for the same error, but the cause comes from a different source. For example, among friends, family, and longtime colleagues, communication is tight. The underlying assumptions have been reinforced over time, and if one says, "Pick up batteries from the store," the other knows to get two packs of AAAs from the grocery store on the corner. New guests to the conversation need the extra details, and if the speaker is undisciplined about knowing the audience and which assumptions are shared and which are not, the detailed information is lost and the person fetching batteries is left hanging in ignorance. They will either return with the wrong thing or bear the responsibility of asking questions like, "What kind of batteries?" "How many?" "Is there a good store nearby to buy them?"

Speaking Intentionally

Being a disciplined speaker and understanding every entry point the audience brings to a conversation is a challenging task, possibly an impossible one. Those entry points are available only through best guesses, which improve through empathy and familiarity with each other's stories, or through the entrant taking responsibility for revealing their ignorance and asking questions to get up to speed. When communities don't acknowledge how their communication has evolved and tightened over time, they become inaccessible to newcomers. The people on the outside carry the responsibility of asking all the clarifying questions so they can be accepted into the group.

I believe this to be one of the core contentions within the Black Lives Matter movement. They wish for other groups to accept greater responsibility for loosening their language and familiarizing themselves with Black-American stories, so it is easier to meet the BLM communities closer to where they come from, without Black Americans bearing the responsibility of asking so many clarifying questions.

Nor do they seek a convergent logic, one that forces assimilation into the dominant group. Our ability to understand one another, and the solutions we create, are beyond those of the Christian missions of America's early days, and beyond the reeducation internment camps of the Uyghurs in China. Rather, we can create new cultures at companies and in America's legal system that allow for the nuance of all the strange and weird cultural behaviors each person in this country possesses.

There is a great deal of error in communicating details from person to person, and it is certainly going to be present in communication between man and machine. The solution people have come to is one of negotiation. People form pre-built stereotypes of others and engage in a period of back-and-forth exchanges to break those assumptions and reach a point of better understanding. In short, people should be prepared to engage in a dance in order to get past errors of assumption.

Examples

In particular, there are errors of specificity and constraint in human-machine communication. Beginner coders learn quickly that for-loops and while-loops can iterate forever, and they learn to put constraints on their loops so the computer isn't stuck in one indefinitely. Whether in communication with oneself (as with hunger), with other people, or with a machine, it takes a period of back-and-forth exchange to come to an agreement on what the intended outcome should be.
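As a minimal sketch of that beginner's lesson (the portion counts and names below are invented for illustration), the difference between the two loops is exactly the difference between hunger with and without portion control: the first has no stop condition, the second states its constraint up front.

```python
# An unconstrained loop: nothing tells the computer when "enough" is enough,
# so it would run forever.
#
#     while True:
#         eat_another_portion()

# The same behavior with the constraint articulated in advance.
MAX_PORTIONS = 2                        # hypothetical limit, decided before we start

portions_eaten = 0
while portions_eaten < MAX_PORTIONS:    # stop condition keeps the loop from running forever
    portions_eaten += 1
    print(f"eating portion {portions_eaten}")
```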

In one of the most advanced examples to date of a computer translating everyday language into code, OpenAI demonstrates its new (2021) Codex, with which people can use plain language to instruct an AI to write a web application. They show how it takes a little back and forth between the programmer and the AI to, together, create a web application.

Clip: OpenAI Codex Live Demo

In the clip, the AI doesn't exactly understand what is intended and executes the command the way it first interprets it, as any one of us would. In this example, the figure of a man gets moved outside the viewing window of the screen. With a little back and forth, communication is achieved and the AI executes code that does what the user wants.

Another real-world example is government, where stop conditions routinely fail to be set. In the codes of law, people pass code with bad stop conditions and the body of code continues to grow. There is confusion when this is addressed under the broad categories of big or small government.

First, it needs to be stated that people pass legislation to transform the Government, which places the Government inside our control.

People can treat the code of law like managing software code:

  1. There is a garbage-collection system on the code to prevent it from expanding forever (a rough sketch of this idea follows below).
  2. There are strategies in place to avoid Technical Debt.
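As a minimal sketch of the first point, and nothing more than an analogy (the statute names, the sunset field, and the sweep itself are all hypothetical), a garbage collector for law would periodically sweep the books and retire anything whose stated lifetime has expired:

```python
from datetime import date

# Hypothetical records: each law carries a sunset date, the way a well-managed
# allocation carries enough information for a garbage collector to reclaim it.
laws = [
    {"name": "Statute A", "sunset": date(2019, 1, 1)},
    {"name": "Statute B", "sunset": date(2035, 1, 1)},
    {"name": "Statute C", "sunset": date(2022, 6, 30)},
]

def collect_expired(laws, today):
    """Keep only the laws whose sunset date has not yet passed."""
    return [law for law in laws if law["sunset"] > today]

laws = collect_expired(laws, today=date.today())
print([law["name"] for law in laws])    # the expired statutes are swept away
```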

Big and Small government can be divided into two problems:

  1. What is the scope of responsibilities the government should have to offer its citizens?
  2. Does the written code align with those responsibilities the Government should have?

If people want a limited Government (which everyone wants; the question is only to what degree), and they don't write stop conditions into their code, then a divide exists between the ideal state and the actual execution of creating such a state. What is occurring is that people want one thing to happen, but their strategy for getting it didn't produce the outcome they wanted. This is fine, and it happens. They have walked in error and hit the wall of ignorance, so it's time to learn and try again.

Trump did address the Government's kind of infinite, while-loop-like error with his "2-out for every 1 in" rule. Maybe it is not the optimal way to limit the ever-expanding responsibilities of the government, but it is one way to address a problem for which no other check yet exists, and, for the time being, something is better than nothing. We can learn to handle the problem better as time goes on.
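A minimal sketch of how such a rule behaves as a stop condition (the rulebook and rule names below are made up for illustration): a new entry is only admitted if two existing entries go out with it, so the total can only shrink or hold steady.

```python
def add_regulation(rulebook, new_rule, to_repeal):
    """Admit new_rule only if at least two existing rules are repealed with it."""
    if len(to_repeal) < 2:
        raise ValueError("2-out for every 1-in: name two rules to repeal first")
    remaining = [rule for rule in rulebook if rule not in to_repeal]
    return remaining + [new_rule]

rulebook = ["rule A", "rule B", "rule C"]
rulebook = add_regulation(rulebook, "rule D", to_repeal=["rule A", "rule B"])
print(rulebook)    # ['rule C', 'rule D'] -- the body of rules shrank instead of growing
```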

Like with the government's expansion, or with the AI making assumptions to build a web application, or putting limits around our own pleasures, the path towards understanding takes time and lots of negotiation.

If AI is to do any harm to people, it can occur through these moments of negotiation: by denying that a mutual point of understanding exists, by denying people the ability to negotiate, or by denying people the time required to carry out this ritual of negotiation.

In order for AI to work well with people, it must:

  1. Have a negotiation process between it and people (a rough sketch follows below)
  2. Accept that there can exist some mutual point of understanding
  3. Allow time for people to adjust and to adapt
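A minimal sketch of what such a negotiation process could look like (the function names, the feedback mechanism, and the toy numbers are all assumptions for illustration, not any real assistant's API): the machine proposes, the person responds, and the loop ends only when the person says the mutual point of understanding has been reached, or when the agreed-upon time budget runs out.

```python
def negotiate(propose, ask_person, max_rounds=5):
    """Loop: the machine proposes, the person reacts, until agreement or time runs out.

    propose(feedback)   -- hypothetical: returns the machine's next attempt
    ask_person(attempt) -- hypothetical: returns (accepted, feedback) from the person
    """
    feedback = None
    for _ in range(max_rounds):                   # the people involved set the time budget
        attempt = propose(feedback)               # the machine acts on its current guess
        accepted, feedback = ask_person(attempt)  # the person corrects that guess
        if accepted:                              # mutual understanding reached
            return attempt
    return None                                   # no agreement yet; the person stays in charge

# Toy usage: the "machine" guesses a number, and the "person" nudges it toward 3.
result = negotiate(
    propose=lambda feedback: (feedback or 0) + 1,
    ask_person=lambda attempt: (attempt == 3, attempt),
)
print(result)    # 3 -- reached through a few rounds of back and forth
```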