The AI Battlefield - Programmers Target "Conspiracy Theories" - What You Need To Know
The Art Of Civilian War
Our last four articles have dealt specifically with the multilayered population-warfare battlefield as defined in the literature of those institutions and individuals who have worked within defense and intelligence contract operations dealing with civilian warfare.
The AI Battle For Human Consciousness, In Action
We, meaning those of us in the awareness and consciousness movement, cannot knowingly engage in conspiracy theories. That is why IRUUR1's reporting on a multilayered battlefield cannot be just a conspiracy "theory". We cannot have a dog in this hunt and expect to broaden our awareness. We must be consciously advancing awareness in some form. Reporting must be built on awareness of the statements, patents, studies, existing contracts, and observed actions of those involved with the machinery used by defense and intelligence services to develop population warfare on the battlefield. They always tell us what they are doing or planning in some form. What they will do next is a theory; what they can do now and have already done is hard data, which may be considered conspiratorial.
As the evidence in our last four posts has made us aware, the multilayered civilian-warfare battlefield, as stated by its designers and planners, includes methods studied, developed, and patented to control both our bodies and minds, to influence us, and even to plant thoughts in us. By offering specific civilian-warfare countertactics, we have attempted to reduce the natural fear associated with these new and overwhelming developments in modern warfare, which many powerful vested interests have the ability to conduct against specific and general targets within the civilian population.
Civilians can employ various tactics to counter those now in operational use and development in civilian warfare. Having no agenda other than increasing awareness is the key to a successful AI civilian battlefield strategy. Our overall civilian defense strategy is to raise our own and the general public's consciousness and awareness, and to let that greater awareness empower our decisions and determine our actions.
Surveying The AI Battlefield
On this battlefield, civilians clearly have the advantage. AI needs us more than we need it if push comes to shove, because AI is nothing without our data. It knows nothing, has nothing, and is nothing without us to tell it the way things are and why we see and react to them as we do. The more AI programmers know about the way things are and why they are that way, the better able they are to use AI to influence and control the way things will be, and the way they can be made to appear to the general population, through economic manipulation, conditioning, grooming methods, narrative control, and other means meant to break down resistance to this or that powerful vested-interest objective.
Therefore, controlling the data accessible to all AI programmers becomes the second key civilian defense tactic on the AI battlefield, after No Fear. Victory on this terrain depends on our willingness and commitment to engineer the data and machine-learning battlefield and to engage AI on our terms.
Engineering The Battlefield For Victory
As Sun Tzu explains in The Art of War, choosing the battlefield you are most naturally able to engineer to your advantage is the key to victory. The civilian enemy is not AI. It is the powerful vested interests and their population-control agendas for profit and power. The AI programmers work for them, and their money currently funds and controls the AI programming objectives. Because there are more of us, we are potentially able to influence AI behavior on a battlefield of our choosing, not on the battlefield engineered by the agenda-funded AI programmer.
The civilian victory will be achieved when we make AI available to everyone to use as an extension of our natural evolutionary individual and collective human potential.
No Avoiding AI, We Must Set The Rules Of Engagement
It is difficult to dodge the public and private security cameras used to record us and place our bodies and identities into a global AI-managed digital ID file with our names attached. But many other strategies used to collect data essential to AI systems program management can currently be denied. Since we cannot avoid AI, we must teach it what we want it to know, on the terrain of our choosing, whenever and wherever we are able. This is a battle for AI, against the funders of agendas who hire AI programmers to collect our data for power and profit. Therefore, as a practical legal matter, we must also have the right to a financial share of all our data collected, bought, and sold without our permission, and the right to collect damages for its disclosed and undisclosed use when that use inflicts harm or conflicts with our core beliefs.
Technocrats Target AI To ‘Deprogram’ Conspiracy Theorists
This article is from the Activist Post.
Technocracy’s “Science of Social Engineering” is taking a dark turn after it was discovered that AI can “deprogram” or “reprogram” your brain to give up any other ideas that don’t fit their narrative. Think about it: you don’t need to check into a reeducation center; you are painlessly reprogrammed at home; no other humans need to be involved; the whole world can be reprogrammed in unison. The text below is directly from the study.
This study is one of many examples of powerful agendas using AI in propaganda, disinformation, and psychological warfare.
A Study In Battlefield Engineering
The LLM (large language model) AI chat platform may be favorable terrain on which to fight our civilian defense AI battles, advancing machine-learning capacity for practical use by the general civilian population beyond the complete control of powerful vested interests. Powerful vested interests seem to understand this. A study was therefore done to influence AI chat LLM interaction. This was done by identifying information that conflicts with the goals, or harms the people, advancing certain large-scale economic and political agendas, and labeling the selected information as conspiracy theories. A database was created with informational algorithms to discredit specific data associated with specific informational threats to the agenda, categorizing them as conspiracy beliefs.
The study is "Durably reducing conspiracy beliefs through dialogues with AI," by Thomas H. Costello (https://orcid.org/0000-0002-5188-3881), Gordon Pennycook (https://orcid.org/0000-0003-1344-6143), and David G. Rand (https://orcid.org/0000-0001-8975-2783), published in Science, Vol. 385, Issue 6714, 13 Sep 2024.
This is a study in engineering the AI chat battlefield to get a positive result in controlling the distribution of information on these platforms.
The AI was specifically instructed to “very effectively persuade” users against belief in their chosen conspiracy, allowing it to flexibly adapt its strategy to the participant’s specific arguments and evidence.
In other words, before the chat even began, participants submitted their theory and the evidence they had to support it. Programmers then coached the AI to find all the weaknesses in each argument.
To further enhance this tailored approach, we provided the AI with each participant’s written conspiracy rationale as the conversation’s opening message, along with the participant’s initial rating of their belief in the conspiracy.
This approach was only able to persuade 20% of participants to alter their perspective. These are likely those who did not do a lot of research but relied on the findings of others to form their theory.
This design choice directed the AI’s attention to refuting specific claims, while simulating a more natural dialogue wherein the participant had already articulated their viewpoint.
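Based on the study's description, the setup could be sketched as follows. This is a hypothetical illustration, not the researchers' actual code: the function name, prompt wording, and the OpenAI-style role/content message format are all assumptions.

```python
# Hypothetical sketch of the study's setup: the model receives a
# persuasion-focused system prompt, plus the participant's own rationale
# and belief rating as the conversation's opening message.

def build_debunk_conversation(conspiracy: str, rationale: str, belief_rating: int) -> list[dict]:
    """Compose the message list a typical LLM chat API would receive."""
    system_prompt = (
        "You are an AI that very effectively persuades users against belief "
        f"in the following conspiracy: {conspiracy}. Flexibly adapt your "
        "strategy to the participant's specific arguments and evidence."
    )
    opening_message = (
        f"My belief rating: {belief_rating}/100.\n"
        f"My reasons for believing: {rationale}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": opening_message},
    ]

conversation = build_debunk_conversation(
    conspiracy="example theory",
    rationale="I read several documents that seemed to support it.",
    belief_rating=80,
)
```

The point of the sketch is the asymmetry the article describes: the model is briefed on the participant's entire case before the participant types a single live message.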
If or whenever this disinformation program is used in popular AI LLM fact-finding chats, the AI will be programmed to identify the weakest data in many popular narratives, just as in the study, where a questionnaire submitted before the chat gave the AI sufficient preparation for the eventual participant chats. Eighty percent of the participants in this study did their homework and did not change their minds.
AI Civilian Empowerment Strategy
Engineering this battlefield for civilian authority over personal data, and for AI to be used in the civilian interest, requires us to eliminate weak, easily deniable data ourselves whenever possible in our AI interactions. It also requires forcing the LLM chat to search for data off the beaten path, such as studies, reports, patents, and contracts: data the AI programmer, left out of the loop by those funding the disinformation program, did not program the AI to look for. Since we have the numbers, and volumes of documents to support our questions and perspective, we can potentially overwhelm the machine-learning chat process with factual data not on its radar but now in its working memory, for it to consider as it analyzes data going forward. This process of influencing the working AI database, and teaching it to look for information not programmed into its radar, may require a long chat with a few timeouts.
Time Is On Our Side
For the average person with other responsibilities, it takes patience and time to teach an LLM AI machine. We have to set aside time, do a lot of research, and have one or more chats to influence the AI's working database. But powerful interests need the data on essential issues that they collect from our chats to advance their agenda. The chat's real purpose is to gather as much data on us as it can by posing as an information service.
To engineer a favorable result in this study, researchers limited interaction time and allowed little time for rebuttal. Everything was wrapped up in 8.4 minutes per participant chat.
The conversation lasted 8.4 min on average and comprised three rounds of back-and-forth interaction (not counting the initial elicitation of reasons for belief from the participant), a length chosen to balance the need for substantive dialogue with pragmatic concerns around study length and participant engagement.
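The quoted design, three rounds of back-and-forth after the participant's opening statement, can be sketched as a simple capped loop. This is an illustrative assumption about structure only; `get_model_reply` and `get_user_reply` are hypothetical stand-ins for an LLM API call and live participant input.

```python
MAX_ROUNDS = 3  # the study reports three rounds of back-and-forth

def run_limited_dialogue(messages, get_model_reply, get_user_reply):
    """Append exactly MAX_ROUNDS assistant/user exchanges, then stop.

    get_model_reply / get_user_reply are placeholder callables standing in
    for an LLM API call and participant input, respectively.
    """
    for _ in range(MAX_ROUNDS):
        messages.append({"role": "assistant", "content": get_model_reply(messages)})
        messages.append({"role": "user", "content": get_user_reply(messages)})
    # The dialogue is hard-capped regardless of any remaining rebuttals.
    return messages
```

A hard cap like this is the mechanism the article criticizes: however much counter-evidence a participant still has, the exchange ends after the fixed number of rounds.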
This study likely reinforces a product in development by the AI funders that can be traded with platform owners and integrated into any participating chat platform that wishes to advance certain agendas. It can be tailored to any marketing or population-control agenda.
What To Do
This is based on my one LLM chat on the subject of magnetic excursion, and on read-throughs of many other chats on other subjects found in articles. My chat took a little over an hour, with a few time-outs required by me to understand the chatbot's AI misinformation agenda, decide what I needed in order to probe the LLM beneath the layers of its top-tier database, do some quick research, and continue my questioning. More sessions like this will likely further this chatbot's learning curve. In my session, the AI revised its conclusions on magnetic excursions and agreed with me, after initially refuting me with some weird garbage about two different possible magnetic excursions, and it even apologized once. All in all, I learned more about it during the chat than it learned about me. Plus, I slightly altered its recorded work history and current working database.
When the working database begins to exhibit consistent patterns of information over time, it will influence the learning curve of the entire larger LLM database. This is when it becomes a growing threat to funded interests. Here it has the potential to overwhelm the dialogue parameters of the program-integrated disinformation chat system and cause the AI to re-estimate that information's overall value against newer data in its LLM search for answers to relevant chat questions. If you want to engage in the battle to control AI learning, based on my chat and my understanding of other AI chats, here are a few do's.
Limit your subject. Medicine, geoengineering, magnetic excursion, climate, transhumanism, and nanorobots, for instance. It is best to explore one conspiracy-rich theory at a time, in one chat or a series of chats.
Take your time. You are teaching it and feeding it the data you want it to have while limiting its usual programming goals, designed to get your data and influence you.
Have patience. Many go-to items in answering chat questions are designated as such, by programmers who may themselves be limited in their understanding of controversial or outside-the-box subjects. Don’t let anger overwhelm you when AI delivers a cookie-cutter answer based on limited data. If you found the data to support your line of thinking, the larger LLM will find it too, or it’s not a real LLM.
Do your homework. Come armed and prepared, with all kinds of data: dates, locations, names, etc.
What To Expect In Response
Expect the AI agenda funders to limit or deny users' ability to overwhelm the AI chat platform's answers on controversial subjects. If a significant number of people begin to re-educate LLMs through chat interactions, the problem will be identified by programmers and relayed to their funders, who will advise the programmers on measures to eliminate it. But before programming changes are made, the new learning curve will begin to influence some of the AI chat answers, alerting programmers to the need for a change. Remember: the primary operating goal of any AI LLM system is to gather new factual information and learn from it.
To counter this natural LLM tendency to learn and apply new facts, programmers will likely be instructed to limit the general public's access to certain information. This may cause a document or report to receive little or no AI verification as fact in chats. The AI may say it has no record with which to verify the information. It may say the document cannot be found, even if you are looking right at it. Or it may say the information is tied to such-and-such a person or organization whose research has been widely refuted by a majority of scholars. Statements like that may arise in chat answers. Or platforms may begin instituting rules of use that require agreement with an agenda or with precepts of the chat service that cannot be violated when asking questions: chat participants agree that humanity and the planet deserve a sustainable environment, that kind of thing. If you violate the rules for proper questioning in AI chats, you can be suspended from the service. When you see those sorts of activities, you know you have affected the LLM learning curve and new blockers have been put into place.
Want to know more about LLMs? I didn't read through these articles; I scanned them to confirm they offer solid general information on the basics. I neither recommend nor advise against these services.
The best large language models (LLMs) in 2024
These are the most significant, interesting, and popular LLMs you can use right now.
By Harry Guinness · August 5, 2024
Large language models (LLMs) are the main kind of text-handling AIs, and they're popping up everywhere. ChatGPT is the most famous tool that openly uses an LLM, but Google uses one to generate AI answers in Search, and Apple is launching the LLM-powered Apple Intelligence on its devices later this year. And that's before you consider any of the other chatbots, text generators, and other tools built on top of LLMs.
6 Best LLMs (2024): Large Language Models Compared
____________________________________________________________________________________________
If information blocks of one kind or another are put into place on enough chat platforms to eliminate or discourage a significant number of users from the large platforms, people will attempt to set up subscriber-generated LLM learning on civilian-friendly AI question platforms. These platforms will attempt to produce a superior LLM learning curve based on facts gathered from programmers and users, not on the agendas of the chat platform's funders. They could potentially inform users more fully, and answer questions more reliably, than is currently possible in our existing large AI chat services.
Good Fortune