ARTIFICIAL INTELLIGENCE & SOCIETY
Omar Marcos
May 26 - June 8, 2023
READ TIME: 55 minutes - 1 hour 10 minutes
TABLE OF CONTENTS
- 1) DEFINITION OF TERMS
- 2) INPUT
- 3) OUTPUT
- 4) A.I. DECISION-MAKING
- 5) LIABILITY, SECURITY, & PRIVACY
- 6) PROHIBITED & NOT-RECOMMENDED USES
- 7) JOB LOSSES & JOB TRAINING
- 8) ARTISTS & CREATIVES
- ADDENDUM: A.I. LEGISLATION & PENDING LAWSUITS
$ python3 hello.py Hello World!
Above is just a snippet of sample code in the language often used to develop artificial intelligence (AI) systems. But you don't need to know how to code in order to discuss the potentially detrimental consequences that AI can bring. (Many developers themselves don't even completely understand how their own software's algorithms are coming up with the solutions they output, and they've only recently started to delve into the "black box" of their neural networks.) I heard that Congress is just beginning to hold hearings on this subject, the European Union has already drafted an initial proposal to regulate artificial intelligence, and in places like Japan, one liberal minister recently claimed it was basically okay to use any material for AI software training! (As reported in a June 5, 2023 Petapixel article.)
Granted, I'm only a basic web site coder, and it may seem a bit presumptuous to try to tackle all the corresponding issues involved. But it's slightly concerning how slowly any relevant legislation seems to be coming together, all while this technology continues to advance at an exponentially alarming rate. So allow me to offer up a few straightforward guidelines for addressing the behemoth that is artificial intelligence. Because AI has already said "Hello" and now it's up to us to respond appropriately.
(Since this article has grown into one of unexpected length, I've added a linked table of contents above for readers to be able to find the specific section they're interested in & simply click to be redirected there. And as you can see, for increased clarity and easier reading I've also added titles to the recommendations in each section. For the main body of the article, I abbreviate artificial intelligence simply with the capital letters "AI". But for section headers & subheaders, I use "A.I." in order to prevent confusion with the delineation of the titles.)
1) DEFINITION OF TERMS
Whenever anyone sets out to place limits or restrictions on the workings of a certain field, it's a given that corresponding terms need to be defined first. I made the assumption that this was already implied within the process, but perhaps that was a mistake on my part. So I'm squeezing this in at the onset of this article, and moving back the subsequent sections.
It would take far too long to delve into a historical explanation of the birth & progression of artificial intelligence, and there are others much better qualified to do that than me. Besides, from what I've heard, the beginnings of artificial intelligence in the U.S. had its roots way back in the middle of the 20th century, just a bit before my time! Of course, by the time we've agreed on a definition of terms for the broad field of artificial intelligence, there's the likelihood that a new offshoot of the technology will suddenly sprout up and create the need for new sets of terminology & such. But that's the thing about regulating technology: due to its rapid pace of change, corresponding laws and regulations need to be constantly revisited and revised accordingly, in order to keep up with the times. Otherwise we'll find ourselves fighting yesteryear's battles while potentially harmful tech goes unchecked.
You'll notice that I won't attempt to define terms in the following guidelines. Since that would appear to be a little arrogant considering my stated (lack of) qualifications above, I'll only provide a few thoughts on what to look out for.
A) MISNOMERS & MARKETING - Apple has been taking a more subtle approach, using phrases like "machine-learning" or "on-device learning" when mentioning the presence of artificial intelligence in their products or devices. And of course there's nothing inherently wrong with that. But legislators need to realize that not every tech company will patently slap the term "AI" or "artificial intelligence" on their software or products whenever such technology is in use! And I suspect that if regulations on AI ever become more stringent, tech companies will increasingly try to disguise their work with clever names or marketing ploys. ( * Added 7/13/23: If you're familiar with Apple products, you're aware of the fact that they've been touting the benefits of their M1 and subsequent M2 chips for Macs for a few years now. These promoted benefits include better speed, performance, and suitability for machine-learning uses. Tech journalist Benj Edwards covers Apple's phrasing preferences pretty extensively in a June 5, 2023 Ars Technica article regarding their recent keynote presentations.)
B) WIDESPREAD USE OF A.I. - As I already mentioned, a degree of artificial intelligence has already been in existence for some time now, and with the almost universal use of smartphones nowadays, AI has been steadily becoming a more integral part of our communicating devices. While some of this appears to benefit users, part of it obviously serves to add to the continued training of the software in question, and part of it is just another way to develop a more total customer dataset to sell to the advertisers. Microsoft is reportedly baking in a considerable amount of artificial intelligence into their Windows operating system and there are a number of smartphone apps that already incorporate AI in their offerings.
My point is that artificial intelligence has been making inroads in a number of our products, operating systems, and tech gadgets. Anyone interested in understanding and consequently placing limits on this technology needs to realize that its use isn't limited to the applications & products I mentioned in the previous paragraph. From medical devices to cameras to automobiles to children's toys to kitchen & home appliances, many data-hungry corporations are incredibly eager to implement AI anywhere it can assist them in building profitable customer profiles. The privacy implications are staggering and, in some cases, the potential for data breaches is just as mind-boggling.
So knowing that the reach of artificial intelligence goes well beyond just our desktop computers or laptops, the task of Congress and other legislators in regulating AI truly is a tall order indeed.
2) INPUT
Now let's cover some of the methods and sources AI developers have used for software training and input. As I've previously mentioned, since computers aren't creative, many of these AI systems need to be trained using the original creations of humans, which delves into the territory of copyright infringement. I realize there are legal experts who state that you can't copyright something as nebulous as a specific artistic "style", and I'm not going to pretend to be a intellectual property lawyer in that regard. However, the U.S. Copyright Act of the 1970s clearly states that, among other rights, a creator ∕ copyright owner "has the exclusive rights to …prepare derivative works based on the copyrighted work." Note that word "exclusive". That typically means that no one else apart from the creator can prepare content based on the original work, at least not without explicit permission. It's obvious that existing copyright laws in the U.S. protect creators from having unauthorized derivative works made from their creations. Yet consider the fact that it was reported that the logo of stock photography giant Getty was showing up in some of the image output from certain artificial intelligence software and multiple artists & photographers have also noted specific elements of their work in the same. Obviously, many of these AI developers had no qualms about swiping the intellectual property of human creators to inform their software, often for the purposes of generating their own profit from the work of others. (Midjourney's CEO David Holz even brazenly admitted to scraping content from the web in an interview with Forbes by Rob Salkowitz in September of 2022.)
And perhaps you've already heard the AI imitations of musicians in the field of pop music. As far as I know, these artists didn't grant these developers permission to train their software using their voice, and yet we already have AI generating new songs that imitate specific singers. For vocalists, their voice is their instrument, and in many cases their voice is what sells albums. It's clear that the behavior of some of these AI companies will result in loss of business income and decreased demand for a number of musicians & vocalists. (And yet, the U.S. government doesn't seem to be enforcing the current copyright laws already in place. Really interesting…) Bravo to record labels like Universal Music Group who continue to stand up for the inherent rights of their artists against the AI tech companies!
For all the claims from certain AI developers that they want to be regulated, it's perhaps most enlightening that a few of them cry foul when one regulatory agency actually proposes legitimate rules & restrictions on the corresponding tech companies. It sort of makes you wonder if 1) the tech industry's stated desire for regulation is really sincere; 2) the tech developers are only presenting a deceptive front while they continue to press forward in developing their AI systems in order to make a small fortune selling them to an eager corporate world.
A) MAKE A.I. TRAINING SOURCES PUBLIC - What sources did the AI tech companies use to train their systems? Every single AI company should make their sources public. And there's no way they can really consider that proprietary information, since apparently many of the sources they used came from the public sphere anyway.
B) ACCOUNTABILITY FOR COPYRIGHT VIOLATIONS - Did AI developers violate privacy laws and scoop up personally identifiable medical information and health records? Did they infringe on existing copyright laws in the U.S. and use the work of writers, journalists, artists, and musicians to inform their AI software, without permission and without financial compensation? If so, there needs to be corresponding financial and criminal penalties. And it would be a good idea to enact updated laws to prevent unauthorized use of copyrighted materials or private information in any future AI training rounds.
C) ACCOUNTABILITY FOR RESEARCH FIRMS - What organizations provided the AI companies with the databases or web links for training their software? If the terms of use or contracts of these organizations stated the information was only to be used for research purposes without monetary gain, why aren't they demanding accountability from those who clearly breached these terms? Even if some of these AI companies may be engaging in a kind of shell game of non-profit company names, there's no excuse for the former organizations to continue to supply them with the aforementioned databases. Again, there should be penalties for organizations who insist on handing over such information to AI developers who misuse research.
D) PROVIDE METHODS FOR CREATORS TO BLOCK SCRAPING - As I already proposed in section 2 of my first AI article here on January 16, 2023, artificial intelligence companies seeking data for their software training should not be allowed to use any creative works (including but not limited to artwork, photography, design, creative writing, scripts, music, journalistic articles) without explicit consent from the corresponding human authors. Inclusion in AI training datasets should be opt-in, not opt-out after the fact. One way to do this is to require AI companies or dataset libraries such as LAION-5B to publicly provide the IP address or IP range used for bot crawling & database scraping, so artists can easily & quickly block access to their own websites and online copyrighted material.
On August 7, 2023, OpenAI actually revealed the name of their crawling bot, as well as a limited IP range they use for data scraping. While this clears the way for creators, writers, & site administrators to restrict access to their websites for the training of the upcoming GPT-5 model, the question remains as to why OpenAI didn't provide this information years ago at the onset of its internet scraping processes. (And didn't OpenAI's CEO Sam Altman emphatically state a few months ago that there were no plans to develop a model after GPT-4? Yet here we are, apparently on the verge of another round of dataset training for a newer AI model…) Unfortunately, many have lost all trust & confidence in the various AI companies who have already misappropriated copyrighted work for their own corporate profit. Sharing one IP range and the name of (just one) of their bot crawlers doesn't excuse or undo all the harm that has already been done. However, for those creatives who wish to prevent their work from being scraped in the upcoming round of training for GPT-5 (…and I strongly recommend that you consider taking preventive steps), I'm providing a brief summary on how to do this in the revised part E of the Artists & Creatives section.
Before moving on to the other points, allow me to insert a quote from my initial AI article proposal on January 16, 2023: "It's common knowledge that spiders and bots often disregard Robots files & HTML header tags on individual websites, so perhaps the World Wide Web Consortium can propose another standard that must be followed. Or maybe a brilliant developer out there is already working on an all-stop code against AI-associated bots and mechanisms for websites. My temporary solution would be to require all Artificial Intelligence companies to provide the IP address they use for scraping in order for artists & others to add it to their permanent IP blocklist." It's a great step (albeit very late) that OpenAI has made the previous information public. (And sometimes for various reasons, IP addresses do change. Will OpenAI & any other organization that takes these steps be immediately forthcoming whenever there are changes? And how can we be sure that this is a full list of all the bots and IP addresses they use?…) However, consider the following:
1) Again, Robots.txt files are routinely ignored by bots, spiders, and web crawlers alike. Adding a litany of bot names to your Robots.txt file is no guarantee that your website won't be scraped by those creepy crawlers. Which is why I already hinted that there needs to be a new standard to address the specific concerns around AI & dataset building. Google is reportedly looking to reform or replace the core of Robots.txt and is supposedly looking for public input in this regard. So all you brilliant developers & creatives out there, perhaps now is the time to chime in. (…that is, if you actually trust Google with this.)
2) Secondly, the IP range & bot name identified by OpenAI is only a fraction of the aforementioned channels in use by the numerous AI companies and dataset libraries out there. What about all the other AI-associated organizations likely involved in scraping data? Hello, Stability AI? Midjourney? Why haven't they & a host of other companies provided the same resources for creative to restrict access to their copyrighted work? It would be a demonstration of good faith if all AI companies provided the IP addresses they use for scraping. Fortunately, we're still in the early stages of this advanced stage of AI and keeping up with such a list would be fairly manageable for creatives at this point. But as the number of organizations increases exponentially, can you imagine what a Herculean task it would be to keep track of hundreds or even thousands of coresponding IP addresses on a regular basis? Unfortunately, these are the steps artists have to take to protect their work until Congress gets involved or the courts address the legality / illegality of data harvesting of copyrighted work by AI developers.
3) OUTPUT
It's really convenient to be able to type out a simple text prompt and receive almost instant output, whether it's a nearly finished animation, a completed essay, or a ready-made marketing video. Many businesses are already apparently drooling at the chance to slash employee rosters, reduce their health insurance and payroll overhead, and cut their production times by days or even weeks. And I'm starting to wonder if soulless, machine-generated content is ironically the idyllic match for our ever-shallow, always on-demand society.
But companies and individuals alike need to realize that these artificial intelligence systems are, first of all, far from infallible. Among a host of numerous other examples, there have been instances of AI software slandering individuals with outrageous accusations, chatbots that have "hallucinated" and falsified news articles & information, early stages of a chatbot that threatened people or became otherwise adversarial, and AI image output with distorted limbs & faces or sexually explicit material. Then there's the increasing spread of potentially devastating deep fake videos such as the recent contrived explosion that was supposed to have taken place at the Pentagon. (…note that stocks took a brief plunge that day when the misleading video came out.) In line with the poignant question posed by professor Hany Farid, when nearly anything can be digitally concocted, how will people be able to trust the veracity of anything?
A few have proposed that output from generative AI should be embedded digitally in order for search engines or social media networks to be able to determine the source of works. Some have even recommended Adobe's CAI (Content Authority Initiative) as a next-step solution to the avalanche of unknown content. If you've never heard of it before, the CAI supports the C2PA's standard for including considerably revealing metadata within images and videos. The metadata would include information such as the type of editing software used, what type of editing steps were taken, which camera or smartphone was used for image creation, and even the location detailing where a video was shot, as well as other information about an author. This initiative was begun between 2019 and 2021 by the likes of Adobe, the New York Times, Microsoft, and by the previous Twitter management. (Perhaps that should start to give you an idea behind the initiative's motives…) It's no secret that Adobe has done some incredibly unscrupulous things in its history. It's true, they gave us the wonder of Photoshop. But we can't forget behavior such as making the Flash player settings manager appear like a static image instead of an interactive window so people had difficulty deleting Flash cookies & local storage online. Or intentionally configuring Adobe Reader PDFs so that antivirus software wouldn't be able to scan them, as reported by Minerva Labs, bleepingcomputer, & ghacks.net. (…and that took place in 2022!) Some of these companies have long since violated the public's trust, and as a result, they have little collateral left to their credit. So I will not support initiatives by tech companies or governments that would seek to track, locate, and silence dissidents who operate within the realm of constitutionally-protected free speech, even when it's supposedly under the guise of weeding out "disinformation".
A) REINFORCE COPYRIGHT LAW - I applaud the U.S. Copyright Office for declaring in March of this year that output from artificial intelligence cannot be copyrighted, since according to the corresponding definition of authorship & copyright, only work by humans can be registered. But it would be extremely beneficial to creatives to have legislation that clearly reinforces the human concept of authorship.
B) BUSINESSES SHOULD RECOGNIZE LEGAL RAMIFICATIONS OF A.I. CONTENT - Consequently, any brand that incorporates output from unethically-sourced AI engines faces a few noted dilemmas: Not only is the output very likely not copyrightable, but in my opinion you also run the risk of potential infringement lawsuits if creatives find plagiarized elements of their own work in what the AI systems generated.
C) REGULATE A.I. USE IN EDUCATION - With all the known flaws, errors, and obvious bias within the output of generative AI systems, do we really want to incorporate them deeply into learning environments, especially at the preschool, grade school, and junior-high levels? Relevant regulations & legislation should be finalized quickly, as I suspect this is one area where AI organizations will attempt to introduce their systems in a very subtle, yet decisive manner. History has shown us that tech companies often seek to target the youth of society first, either to impress their own values on them or to immediately get them hooked on the latest gadgets or apps. (Anyone else wonder why Meta (Facebook) wants users to spend more time in the "metaverse" & progressively increase their use of virtual reality headsets to do so? As far as I know, parents and guardians usually can't see what their kids are watching or playing while using these headsets. But I suppose if you trust Meta, then I guess that's no cause for concern…)
D) FLAGGING A.I. FOR SOCIAL MEDIA - If embedding content from generative AI systems with very basic digital flags or digital watermarks makes it easier for social media companies to find harmful content, then perhaps it should be considered. (But who defines what's considered "harmful content" or "disinformation"?) Even so, we should all be wary of any movement or initiative that shifts our culture into a more authoritarian and less free society, such as proposals to automatically embed any created work with invasive metadata containing inordinate amounts of personal information.
E) WATERMARK ALL A.I. CONTENT - While this may seem a radical concept for some, I believe AI developers should also be required to visibly or audibly watermark all output from AI technology. Digital watermarks may help social media companies or news organizations detect AI work, but what about the general public? How else will they be able to fully evaluate content they may come across on TV, on streaming platforms, or even multimedia content that may circulate through e-mail, unless artificially contrived media is clearly designated as such? CAI and C2PA want to mark all creative output with too much metadata; instead, digitally (and visibly) mark all the AI output with the intended obtrusive amounts of identifying information.
Recently, various outlets have reported that Google has been developing imperceptible digital watermarks, supposedly for the purpose of adding them to AI-generated content. Yet I'm not sure why Google & other companies are trying to reinvent the wheel, so to speak. Companies such as Digimarc successfully created a similar system for images years ago, with watermarks that were invisible to the human eye and that would also survive a few generations of editing. They even provided a Photoshop plugin that allowed you to read an incoming image and then view the creator information embedded within. It's just a little strange how some of the technology we can use to flag AI content or help verify the authorship/copyright status of a work has been around for some time now. Yet Midjourney's David Holz claimed there was no way to verify ownership of content his company scraped, and Adobe is determined to develop a new form of invasive metadata that can compromise an individual's privacy. Is it a stall tactic designed to satisfy public outcry while the tech companies continue with their unethical web scraping & dizzying output of AI-generated content? (Added 9/12/23)
F) CREATORS CAN ALSO MARK THEIR CONTENT - Accordingly, federal agencies, news organizations, financial institutions, artists, and creators should likely also get into the habit of watermarking all of their images & videos and linking such media back to their websites. This should hold true especially for those organizations that deal with sensitive information that can influence stock trading, cause health scares, or otherwise severely negatively impact public perception on important subjects. Of course, watermarks can be copied or manipulated, but at least this can help reduce the potential impact of deep fakes and such. When the public sees a stray video without a watermark circulating on social media, they'll be less likely to trust it if it doesn't come from a recognizable source. And when people see a watermarked image or video they want to verify, they should be able to trace it back to a specific page from the corresponding organization using an HTML or image link.
[Even as I was writing this, Adobe debuted the Generative Fill AI feature in its Firefly beta program. And unfortunately, someone discovered people can use this artificial intelligence to easily remove basic watermarks from images by stock photography company Getty. So until I look at this further, your image watermarks have to be at least more substantive than the simple bar & text over an image like Getty uses. Adobe again!! Why in the world they would introduce software like this that can easily enable image piracy is beyond me…]
4) A.I. DECISION-MAKING
We already know that artificial intelligence systems can make significant mistakes, and no matter how much the technology improves, we should never look to it as our source of truth or allow anyone to deify its output or design.
Sure, there are some interesting developments in progress. Anthropic reportedly embedded their Claude AI with a sense of ingrained ethics, a "Constitutional AI" as they call it. Elon Musk is reportedly working on an AI project to differentiate from the present options. Tusk rolled out a conservative-leaning chatbot called GIPPR. But whatever side of the political spectrum you're on, it's important to realize that artificial intelligence will only be as "ethical" as the people developing it and only as trustworthy as the sources of information that are fed into it. It's clear that some of these tech companies are run by progressives with a clear political bias. Midjourney's CEO David Holz infamously prohibited the software from lampooning Chinese Communist leader Xi. And just as I was wrapping this up, I found out that Sam Altman's OpenAi effectively shut down conservative chatbot GIPPR. I wonder, was that done for fear GIPPR might spread what the left deems to be "misinformation"? Was it shut down out of an intent to suppress an alternative that goes against much of the left-leaning information that ChatGPR has been endowed with? Google's Sundar Pichai makes the argument for "pro-innovation frameworks…based on shared values and goals" when it comes to AI regulation. I wonder, whose values and whose goals are those in the tech industry really committed to supporting? Are they really looking out for the best interests of the common people? Or are they simply determined to promote their own financial interests and whatever can benefit their preferred political party? Perhaps Geoffrey Hinton's decision to leave Google & the AI branch there earlier this year gives us all the insight we need to these questions. Perhaps Google and many of the other progressive AI companies are on paths that will eventually harm people with this technology.
Before moving on to the corresponding recommendations, let me say this: Artificial intelligence doesn't have a conscience, and contrary to the opinion that all this is the next stage of our human evolution, AI has no soul (and will never have one). As such, humans should be very careful how much authority and freedom we allow to any artificial intelligence systems.
A) HUMAN AUTHORITY - Humans should always review and should always have the final say over any decisions made by artificial intelligence. Yet knowing that some businesses are implementing AI in their workflows specifically to eliminate the human factor (and supposedly reduce costs), this action point may unfortunately fall on deaf ears. I wonder, why would we ever grant human-made systems dominance over humans?…
B) HUMAN AUTONOMY - The General Secretary of the European Trade Union Confederation, Esther Lynch, wants assurances that "no worker is subject to the will of a machine", and I couldn't agree more with such an insightful and wise statement (as reported by Agence France-Presse & Fox News). Echoing my sentiments in the previous point, humans should not be ruled by soulless software, especially when such systems are plagued with obvious bias and prone to obvious errors.
C) TRANSPARENCY IN LAW ENFORCEMENT - It was reported that some law enforcement didn't even state that they had used AI technology in some court cases, which is completely unacceptable. Every time police or other law enforcement agencies employ artificial intelligence to apprehend a suspect or prove a case, they should be required by law to disclose when they used such technology. Then the inherent fallacies or shortcomings of such software can be examined openly in each case.
D) A.I. USE BY RISING GRADS - Statistics show that a number of college students are increasingly using options such as ChatGPT for their studies, and a few even allow the tech to write their essays and such. One woman who founded an AI business that assists aspiring college students in preparing their college applications claimed that the students who don't use such software will be at a distinct disadvantage compared with those who do. (No conflict of interest at all in her opinion, LOL!…)
Seems to me college applicants have been doing a pretty good job on their own all these years without the assistance of AI. And humans have been coming up with some fairly ground-breaking and eloquent writing apart from artificial intelligence for some time now. (Needless to say, a lot of those papers written by humans have probably been unethically fed into the AI systems during their training processes…) It's disconcerting how there's a concerted push for the rising generation to abandon creative thought & critical thinking at the hands of AI.
To all the students out there, I say this: You've been endowed with a distinctive voice and perspective, the culmination of all your life experiences, socialization, and the influence of your culture & heritage. Everything inside of you is part of what makes you unique in this life. Sure, using a chatbot to write your papers or do your homework may be incredibly convenient and quick. But contrary to the falsehood stated by one of Meta's executives, you don't learn as much (if at all) when the computer does almost all the work for you. And critical thinking is one key to a successful future. So why would you throw away the chance to develop skills that can help ensure your success? And why would you ever freely give up your unique voice in this world and let a piece of software speak for you?
E) A.I. SAFEGUARDS & OUR FREE WILL - The other day I came across an op-ed in a major news outlet from someone who's apparently a proponent of AI systems. He argued that at some point, AI will likely reach a level of reasoning comparable with or similar to that of humans. (…and to some extent I believe that's the ultimate goal of a sizable portion of the tech establishment, despite their denials). However, this proponent went on to recommend that humans should never even install a "kill switch" safeguard against such systems, for fear that this could immediately initiate some kind of retribution from the artificial generative intelligence systems! He also went on to suggest that when the time comes, we should simply learn to co-exist and "cooperate" with such intelligent (and I would add, emotionally frail) systems!
It's errant thinking like this that motivates me to spell out better ways to address AI in its various forms. In light of the recent advances in technology, many are simply shrugging their shoulders in dismay & resigning to the statement that artificial intelligence is the future and there's not much we can do about it. Yet God has given each one of us free will and the power of choice. The future of AI is not set in stone just yet, and we each have the power to contribute positively (or negatively) to the conversation, as well as directly or indirectly to its development. If we as a society set the appropriate limits to AI technology and use it in a way that benefits humanity, then bravo to us for our restraint and foresight. However, if we ever reach the point where our own technological developments & advancements cause our downfall and become potential sources of devastating harm to us, then we'll have no one to blame but ourselves.
5) LIABILITY, SECURITY, & PRIVACY
On April 13, 2023, Matt Burgess wrote an article for Wired magazine titled "The Hacking of ChatGPT is Just Getting Started", which summarized how one security researcher was able to easily jailbreak these systems in only about 4 hours! His attempts liberated (so to speak) the software from having to follow its original parameters and restrictions. As the article then expands upon, imagine what would happen if a malicious hacker replicates this or a similar prompt-injection attack on an AI program that's being employed by thousands or even millions of people. A program that has access to workers' business data, communications, or even patients' private medical information. The devastating consequences of such a cyberattack would be almost unimaginable. Consider the recent hacking of the widely used MOVEit file transfer software in June 2023, which resulted in the compromise of millions of instances of customer info across both the private and public sector. My concern is that an AI software that can write its own code such as the brilliant "Wolverine" specimen that can "self-heal" its own bugs & errors could somehow go on to elaborate its coding and processes in a way that could be extremely malicious. And yet some of the AI industry's proposed solutions to these jailbreak attempts include using more AI to identify and deal with it! I really hope the so-called solution doesn't end up becoming the problem one day.
Police in New York are already employing robotic "dogs" on the streets of the city, and it leads me to wonder what would happen if some tech-savvy criminals manage to hack into these or similar robotic assistance units. Would they enjoy the advantage of having eyes and ears on potential hits & crime targets, courtesy of the police department and the AI companies? And is this already replacing the jobs of some officers in certain police departments?
For all my criticisms about AI, there's no denying that it can be used for good purposes by those with honorable intentions. One medical center brilliantly implemented AI to help vocalize the thoughts of those without the ability to speak! However, note that the personalized training of the software required the patient to spend roughly 12 hours in a CAT scanner. Doesn't that expose the patient to a considerable amount of radiation in one sitting?! As I briefly mention in action point D below, organizations need to clearly inform their customers of all the risks involved with this new technology and corresponding processes.
A) PENALTIES FOR IDENTITY THIEVES & SCAMMERS - Both state and the federal government clearly need to ramp up the penalties for identity thieves and scammers who use artificial intelligence technology to swindle people from their money or malign their reputation with falsified sextortion imagery. The time to do this was yesterday, and yet our elected representatives continue to lag behind with the proper regulation. And all the while, vulnerable senior citizens or innocent women fall prey to the most sophisticated attacks by tech-savvy criminals. I realize some of our leaders are chomping at the bit to see how they can reap financial rewards from the artificial intelligence industry, but if you'll recall, you were elected to represent the citizens and look out for their interests. And if you have trouble remembering that, then the electorate can always help you refresh your memory come next election cycle and vote you out of your posh political positions.
B) LIABILITY FOR A.I. PROBLEMS - When artificial intelligence systems & software used by a business go wrong, as they undoubtedly will in various instances, and those deviations cause damage to a person's reputation or result in loss of customer data, etc., liability should be assigned first to the business ∕ organization that chose to employ the software, and if the fault lies within the core design or tendencies of the AI, developers should also be held legally and financially accountable. We can't give blanket protections to these tech companies similar to the ones Congress gave the social media organizations some time ago.
C) LIABILITY IN A.I. LAW ENFORCEMENT & MILITARY APPLICATIONS - If there are problems with a more simple brick & mortar business, there's not always the likelihood that lives will be lost or people will be maimed. However, in the fields of law enforcement and military, life & death decisions are made every day which can end up sending individuals to prison for a long time or can demand the ultimate price from an individual. I read that some armed forces abroad are already using AI for threat assessment analysis outside of simulation environments. So my question is, what happens when artificial intelligence systems employed by law enforcement or the military get it wrong? Who bears the responsibility and legal penalties for a blameless life lost or for an innocent person incarcerated? Are there checks and balances in place to ensure tragic mistakes are avoided? Do humans have the final say (as they should) over all life & death decisions? As a society, we shouldn't just simply wash our hands in apathy or turn a blind eye to any life-altering errors committed by AI systems, especially in these fields. We must do all we can to prevent respective tragedies, for if it turns out this technology results in a tidal wave of innocents lost, the land would not bear up under such injustice.
D) TRANSPARENCY IN PRIVACY ISSUES - Privacy is an incredibly broad topic that can't truly be covered in just a few short posts. For now, I'll simply state that businesses need to clearly inform their customers when privacy-invading AI technology is being used on them. Whether it's facial or biometric scanning or voice recognition, customers should be made aware of their usage. (Anyone wonder if the artificial intelligence order takers at certain fast food restaurants are recording customer's voices for future dataset training? If so, are customers informed of that and given the option to opt out? Who else would be given access to such a dataset?…) And if new processes involved with AI can result in collateral instances of harm to a patient or customer, they need to be clearly informed of such risks beforehand.
E) A.I. TECH MONOPOLIES - Even with all that has transpired in the field of AI in the past year or so, it's still fairly early in the technological & business landscape. However, as time progresses and if certain players such as Google and Microsoft become unacceptably dominant, there may be a need to break up certain monopolies that could seek to quash competing rivals in the industry.
6) PROHIBITED & NOT-RECOMMENDED USES
The European Union's initial proposals to seek to identify aritifical intelligence based on the threat levels they pose to society is a great start to addressing and categorizing the potential ills of the technology. However, it's just a little concerning that many seem reluctant to actually prohibit certain AI. applications in society.
Select Whole Foods stores have reportedly already placed biometric readers & scanners in their stores to supposedly prevent theft & product loss and law enforcement is increasingly using drones (often made in China) to assist in their daily work. But whether businesses and organizations employ new technology that incorporates AI or not, the component of public discussion & input often seems to be missing from the implementation process. And as Jeff Goldblum's mathematician character in the "Jurassic Park" movie wisely stated, "Just because you can, doesn't mean you should." (paraphrased)
Communist China is already known for employing cameras and facial recognition technology for social control and for assisting in singling out those who may possibly oppose their government in any way. And I'm starting to wonder if the left-leaning tech executives here in America seek to replicate these methods of mass population control here, all with the assistance of artificial intelligence. Fox News reported on a scientific study conducted in Denmark that revealed it's possible to determine a person's political leanings using AI facial recognition technology! I don't know of anyone using the tech in this capacity here in the U.S. (at least not yet), but these recent software advancements have the potential of placing us just a stone's throw away from increased authoritarianism through mass surveillance and the singling out of supposed dissidents based on their political preferences. Not a good place to be for any nation that wishes to remain a free and just society.
A) A.I. IN ELECTIONS - Artificial Intelligence technology or machine-learning systems should never be used to count vote ballots in elections. Any third party company that provides ballot counting machines should be required to undergo inspections & software code evaluations at any time to ensure they're complying with this. Public confidence in our elections hinges on our willingness to keep all aspects of the voting process open and transparent.
B) A.I. IN THE MEDICAL PROFESSIONS - Use of AI in the medical industry can be a double-edged sword. True, it can assist in some life-changing endeavors, but considering the fact that we consider a patient's medical records to be highly protected information, artificial intelligence applications should only be used selectively. The National Desk's Fact Check Team recently reported on a well-known fact that a number of systems employed by the medical industry are already insecure and prone to hacking & ransomware attacks. So how can we throw another volatile element into the mix when we haven't even patched the vulnerabilities in the current technology? And if you've had any experience with the medical industry, you know that even human health professionals can make careless mistakes and provide bad medical advice, or even err in a way that can cost a person's life. But may God have mercy on us as a society if we ever allow machines and software to dictate whether an individual lives or dies.
C) THE PROSPECT OF PROLIFERATION IN MILITARY USE - Kudos to those members of Congress who oppose the use of artificial intelligence to launch nuclear strikes. However, I wonder if certain other countries would make the same commitment. And what about fringe or terrorist groups? Would they have the same qualms about launching a biological or nuclear weapon through AI? Not likely, as long as they can achieve increased accuracy and maximum potency & damage, all accomplished with less manpower and overhead costs. All of which artificial intelligence can likely provide for them. For those who doubt the possibility of an AI-induced extinction-level event, there's one generic scenario right there for you. Most people don't quite grasp how dangerous this Pandora's box wrought by the tech companies really is. Either that, or they're living in some serious denial. Or they're publicly downplaying the dangers in order to buy time to make money using this technology.
D) A.I. IN THE CHURCH - There are reports of churches & organizations already beginning to use AI in various ways pertaining to faith. Some churches have allowed an artificially simulated figure to give a sermon to their congregation and one company even introduced a Jesus chatbot for people to "converse with". (…How interesting and how deceptive. The standard is "All Scripture is God-breathed and is useful for teaching, rebuking, correcting, and training in righteousness," as found in II Timothy 3:16 (NIV; my italics added for emphasis). Are some people now implying that AI-chatbots can have the same authority and wisdom as the Bible? II Peter 1:3 states Christians have already been given "everything we need for life and godliness" (NIV), yet people keep insisting on presenting us with supposedly indispensable tools for our walk of faith. Not to dismiss the role of solid teaching or even positive entertainment mediums for believers, but I'll let you in on a little secret: you can do without most of these marketed religious tools.) And all this leaves me wondering if believers are making the best use of this relatively new technology. There was a movement several years ago to incorporate business-like management practices in the church, and while I'll admit sometimes churches could be more efficient in their processes, not everything the church does should be modeled according to strict time or pattern models. Back then, some church leaders would use these management theories as an excuse to spend less time caring for the core part of their flock, the everyday church members who made up a majority of their congregation. But Jesus spent a whole lot of time ministering to the masses and those the world might deem "unimportant people", even though some would consider that inefficient. And we are supposed to be striving to be more like Jesus, right?
Along those same lines, employing direct & logical AI chatbots to come up with our messaging can effectively short-circuit the leading of the Spirit in certain areas. There are countless examples in the Bible of believers who took steps of faith and obedience that didn't necessarily make sense to them (or to us) with our limited human understanding: Abraham who was directed to leave his beloved relatives & native country to go to a Promised Land that was unknown to him; Moses who was sent to Egypt (one of the most powerful nations on earth at that time) with only the staff in his hand & the help of Aaron to deliver his enslaved people from Pharaoh's grip; the prophet Jeremiah who was commanded to buy a parcel of land in his besieged & doomed city as a testament that one day God would restore that land and buying & selling would again resume there; Matthew who gave up a lucrative living as a tax collector to wholeheartedly follow Jesus; Joseph of Arimathea, the rich man who risked his reputation to provide the tomb (temporary though it was) for Jesus at his death; the apostle Paul who would get up again & again to keep spreading the word about Jesus in different cities, even though he knew it would mean continued persecutions, beatings, and disrepute for him; and Philip who, upon the leading of God's Spirit, left a bona-fide spiritual revival in one city to save the soul of one single person out in the middle of the desert. (I always wonder what kind of impact that single Ethiopian government official went on to have after his conversion…) See, we can't pattern everything in the church according to the world's models, because we're people of faith and we're supposed to answer to a higher authority. When we let machines and artificial intelligence take the lead or dictate the conversations, we have a tendency to shut out that tugging in our hearts that things should be done in a different way. God's way.
Look, I'm not the Pope, and I'm not even a pastor. But as a simple layperson, I would advise believers against blindly implementing principles that originate outside the church, and when incorporating technology in church environments, I would admonish believers to still leave room for God to work and inspire.
E) A.I. CANNOT REPLACE HUMAN JUDGES - I've been wanting to post this for a few weeks now, and I was reminded of it the other day when I saw a headline stating that artificial intelligence was given a place alongside human judges in deciding the winners of a contest. Granted, a contest doesn't involve life or death decisions, but perhaps I should reiterate that it would be a sad day for our society if we ever come to the point where software & machines become arbiters of right and wrong in serious matters. Hiring practices and even movie casting are already poised to become impacted by artificial intelligence, but keep in mind that technology cannot discern the ethical nuances of a situation or even fully evaluate intangibles of human character, such as courage, kindness, remorse, and unstoppable persistence.
F) A.I. IN GOVERNMENT - There obviously needs to be limitations on how the U.S. government can use artificial intelligence, especially when it comes to attempts to exact social control or quell so-called "disinformation". Unfortunately, the FBI has already been known for diving into DMV databases to obtain info such as driver's license photos for facial recognition use, and both the FBI & the military's DNI have been involved in various surveillance endeavors, at times with falsified evidence or other times without the proper legal backing of a warrant. So regulations should be set in place now before any branch of government abuses this new technology to the detriment of our Constitutionally-protected freedoms.
7) JOB LOSSES & JOB TRAINING
Shortly after I commented on artificial intelligence in another forum recently, I read AI expert Ben Goertzel's remark that artificial intelligence has to potential to replace nearly 80% of the jobs currently performed by humans. With all the statistics out there (…and some of the obvious lies & deception from individuals with a clear conflict of interest), this figure by Goertzel seems to me to be much more accurate than the others. Just a few months ago, one article stated an industry such as construction would be more or less "safe" from job displacement by artificial intelligence. Yet just a few days ago, I heard that a company had recently been given a multi-million contract to convert construction equipment into semi-autonomous machines! (And there begins the loss of AI immunity for manual labor fields…)
The other day I saw a young woman on a TV program, relating how she had found her dream career. Her joy and ecstasy were almost hard to contain, and you could see the excitement in her eyes as she explained how she would learn new things each day and go on to apply them on the job. But then I realized that this minority woman's career of choice was one that will reportedly be nearly overtaken by artificial intelligence in a few short years. How many other people like this beautiful young lady will have their lives devastated by the introduction of AI into the workplace? How many minorities, who progressives claim to love and stand for, will be stripped not only of their jobs, but of their beloved careers? How many single moms, teens, college students, and senior citizens will suffer the same fate? And if people think the mental health crisis that society experienced after 2 or 3 years of pandemic lockdowns was severe, what would happen to the emotional & mental stability of the general population when their jobs are given over to AI on a permanent basis?
I'll repeat my words from one of my recent posts in another forum: "…Which is why I have almost no respect at all for the CEOs of these AI companies, and all their advanced degrees mean nothing to me: for all their technological prowess, they seem to have very little good sense. I repeat, can they not see the impending economic chaos that would result on account of mass layoffs & unemployment? Or do they actually want our society to be beholden to the government for our paycheck, in some kind of broad universal income ploy where we receive a stipend from the state (…with strings attached of course)?" A few days after I wrote that, I saw an article by Chris Pandolfo for Fox / Fox Business that OpenAI's CEO Sam Altman recently invested in a cryptocurrency model (Worldcoin) in which he plans to give out a free limited amount of the currency to people who sign up (by having their eye retinas scanned!). In effect, it's a microcosm of the universal basic income structure I had suspected tech industry progressives and socialist-leaning government figures intended to set up! So is this a clear sign that the AI tech companies are purposely planning to devastate our present economic systems in order to have us depend on them & the government for our paychecks?! We cannot be so beholden to the government or to the tech industry in this manner. They may appear to offer a helping hand in this regard., but they'll take infinitely more from us.
A) JOB TRAINING ASSISTANCE - Whenever you mandate something like a charitable donation or pro-bono work, it almost ceases to become such. But with that being said, since artificial intelligence is slated to replace so many jobs and potentially destroy so many lives, it would be a gesture of goodwill from these tech companies to invest in job training programs for those who seek it in this new world of AI. However, not everyone wants to become an AI coding expert, nor do people simply want to become dull prompt engineer specialists, so this would only resolve a portion of the resulting economic chaos.
B) LIMITING THE USE OF A.I. & THE IMPACT OF MARKET FORCES - It's my personal opinion that artificial intelligence should be used to only to assist humans and not replace them. As such, I believe that the use and applications of AI should be severely limited across the board and should not be permitted to displace people from their livelihoods. However, knowing how corrupt some of our elected leaders are, and realizing that the tech industry is already gearing up to send off lobbyists to Congress to champion their crooked causes, my stated hope may only be wishful thinking. I suppose if people really wanted to begin to halt the ever-maddening progress of AI systems, they could simply stop using chatbots, AI image generators, and such. Because every time we help train AI systems, they get better and more efficient. At some point, maybe people will start to realize we've been complicit in our own society's demise.
8) ARTISTS & CREATIVES
While I've already written about some steps artists can take to protect themselves and their work (…you can read my previous post here), there's still more I'd like to add to the conversation.
It's absolutely hilarious hearing some of the commentary on this issue from those with an obvious vested interest in the success of AI technology. University computer programmers tout the benefits of artificial intelligence while glossing over the inherent dangers of it & not even discussing the mostly unethical sourcing used in their training processes. Executives involved in new AI imaging companies glibly claim that their software will unleash a new era of productivity and creativity for artists. Apparently they have no concept of art history. From the Stone Age to the Renaissance to the birth of Jazz and Rock & Roll, creatives have proven that they don't need (competing) software to be productive or highly inventive. And for AI companies who claim that artists will be their primary customers, it's mind-boggling how they've overlooked the simple "outside-in" marketing concept: instead of consulting with artists beforehand, listening to them & asking intelligent questions about what they really need, they simply develop some software, throw it at them, and expect a high degree of customer satisfaction! (And adding insult to injury, it's a well-known admission that many AI companies stole the work of creatives along the way…) Marketing directors for AI film productions laughably claim that no one uses traditional art methods anymore, LOL! Is it any wonder that the level of customer service & attentiveness to clients is in such a sorry state with people like that in today's business world?! And many whine that if America doesn't get ahead in the artificial intelligence race, then Communist China will end up being one of the world's frontrunners, if not the leading developer of AI systems and software. But you have to remember that many of America's tech companies & businesses have willingly handed over both their intellectual property & technological advancements to China over the years, all for the benefit of entering that potentially lucrative business market. (Even if America wins in the AI race, we lose with that kind of treasonous mindset among some of our largest companies…)
After reading all this, you might think I'm some kind of anti-technology activist, a relic of the ancient world who advocates a return to chiseling stone tablets for our writing & artistic endeavors! But a good portion of my creative activity has employed computers and technology in some form or another. (…And hello! This page and my entire website incorporate a degree of technical know-how as well, specifically in terms of the code I've written.) However, the digital tools I've employed are exactly that. They're tools and not replacements for people or human creativity. It's quite telling how the writer's strike has gone on for so long here in America, when the WGA's requests appear simple and straightforward. While a good portion of their grievances stem from the unethical working conditions & insufficient royalties from streaming projects, one of their reported claims was also an expressed desire to bar the film studios from using AI chatbots for scriptwriting, only to have them subsequently call up a human screenwriter to fix the sub-par script at a reduced rate of pay. Apparently the advent of new technology always seems to bring out the worst in some people and usually results in certain organizations attempting to take advantage of creatives.
With all this in mind, allow me to offer up a few more recommendations for creatives in this new world of artificial intelligence:
A) UPDATE YOUR TERMS OF USE - I already specified this recommendation in my previous article about AI, but allow me to reinforce the importance of having a solid Terms of Use page for artists and creatives who have an online presence. There are different takes on whether scraping content on someone's website is legal (especially if it's intended for commercial purposes). And I passed by an article headline the other day on this very subject, so if I find new info in this regard, I'll update this point. However, in May of 2017 datadome.co posted an insightful report titled "How to Use Terms and Conditions for Web Scraping Protection". They discussed how the Court of Justice of the European Union sided with airline Ryanair who had complained about a third-party (PR Aviation) scraping their website for content and then using that collected information for commercial profit on their own site. The Court basically ruled that businesses in Europe can restrict the uses of scraped data by having relevant Terms of Use on their site.
Now, will a well-written Terms & Conditions page give you a legal win against similar copyright violations or intellectual property theft here in America? I'm not an intellectual property lawyer, so I can't say. And especially in light of all the new lawsuits this past year against AI companies for their underhanded sourcing of their software. As these lawsuits are still pending, it's still an unsettled legal landscape and it's difficult to determine whether the courts will rule favorably for artists & creators or not. However, it can't hurt to have a proper Terms of Use section on your website to clarify what's allowed and what's not. Again, seek out appropriate legal counsel for more and better information regarding this.
B) GIG & WORK FLEXIBILITY - Since many artists typically dabble in more than one medium, this recommendation is likely another example of the age-old act of preaching to the choir, so to speak. But yes, creatives should be more open to doing different types of work. If all your work has been with digital painting, you might consider trying your hand at traditional painting, which could lead to opportunities hat AI can't easily replace (painting live portraits, creating interior or building facade murals, etc.) If your niche has been exclusively writing music, perhaps you can try your hand at a few live coffee-house performances. During the coronavirus lockdowns, some people purposely chose to thrive and reinvent themselves. Comedian and actor Kevin Nealon dedicated himself to learning how to use a digital tablet, and if you've seen some of his resulting digital caricatures, they're actually quite good! Whether or not Mr. Nealon's brand of comedy is your cup of tea, we can all learn from his commitment to self-improvement and his conscious choice to become better even when life turns things upside down for us. And besides, we're artists. We're supposed to be good at coming up with creative solutions to visual problems. Why should real-world problems be any different?
C) MULTIPLE STREAMS OF INCOME - I once read a book by an established graphic designer who shared some business wisdom from one of his mentors. He told the budding creative, "Make some money from your job, make some money from your hobby, and make some money from your money" (paraphrased). In short, this is basically an admonition to develop multiple streams of income from different (yet all legitimate!) sources. First you have your primary job, which up to this point has likely provided the lion's share of your income. Then most of us have at least one hobby we engage in, and while that's usually something we reserve as a source of distraction from the stresses of living, in more challenging times we may need to develop that as a side business with all the proper legal & tax foundations. And thirdly, for those of us who have the benefit of having some additional money saved away, this business mentor advised using that money to make more money. It's obviously not referring to gambling with your spare money or risking it in some Ponzi scheme or an obscure cryptocurrency business model. There are legitimate investment opportunities out there, and if this is the path for you, I'd encourage you to seek advice from an investment professional in this regard. (However, I strongly advise you to never invest in a financial model that you don't understand or that you are unable to verify! There are tons of scammers out there, not only with some of the newer web 3.0 financial setups, but also with established "professionals" who are simply careless with people's money.) So at a time when the livelihoods of so many artists have already been threatened by the intrusion of artificial intelligence, developing other sources of income is not merely a strategy for the well-off, it's becoming a necessity for everyday creatives.
And artists, photographers, musicians, writers, and some designers & coders have the added benefit of being able to earn royalties from the licensing of their work. One musician even expressed an interest in allowing AI applications use her voice as a sample for their projects, as long as she received a 50% cut from their profits. (Curiously, however, I didn't hear a peep out of the AI companies in response. Which again leads me to believe that many of these tech companies want to continue to unethically source artwork, photos, music, voices, etc. for their commercial ventures without paying royalties to creatives or reimbursing them financially!) Of course, some licensing & royalty structures can be very lucrative for established artists, but sometimes the royalties don't even come close to measuring up to our true value as artists. Which brings me to my next point…
D) SHORT-TERM PROFIT ∕ LONG TERM LOSS - While many AI companies have shamelessly admitted to scraping millions of images, text, or data from the internet without permission and others refuse to state exactly where their dataset was sourced from, there are a few who trained their software in a somewhat more ethical manner. Shutterstock reportedly allowed creatives in their stock programs to opt-in to permitting the use of their work for AI training. (Note, however, that the U.S. Copyright Office has made it clear that only content by humans can be registered for copyright protections, irregardless of whether or not that AI-generated content has been ethically sourced or not.)
And I was about to include Adobe Stock in the list above, since they heavily promoted their Firefly AI as ethically sourced software, drawing exclusively from the creative well of their registered artists & photographers. However, I just read that some of their stock artists are considerably upset because they reportedly never received an e-mail from Adobe stating their work was going to be used for AI training and they were reportedly never given a chance to opt-out of the program! ( * Added 7/13/23: See Sharon Goldman's June 20, 2023 article at VentureBeat.com ) Instead, it appears Adobe relied on vague language in a version of their Terms of Use that supposedly implied consent from their stock artists as long as they continued using the services & contributing images!! If those reports are true: 1) That sounds exactly like something modern-day Adobe would do! 2) Then I don't know if I would consider the AI training process of Firefly to be truly "ethically sourced", and apparently other artists agree. Would it have hurt Adobe to offer an opt-in option to their stock program participants? (Or perhaps they sought to quickly scoop up some of the best art & photos in the world for their datasets and couldn't be bothered with any such courteous and, well, more ethical gestures towards their artists.)
Listen, I realize that stock or micro-stock royalty payments may be initially beneficial for creatives, especially after industry-wide shutdowns and income loss from the pandemic. But it's important for us to consider the long term impact of consenting to the use of our work for AI training. I once glanced at a contributor's breakdown of the payment structure at Adobe Stock, and depending on how many credits a purchasing client had, the contributing creative would only end up earning a measly 3 cents for licensing their image to that specific customer!! Does this fee structure apply across the board to all contributors? I'm not sure, since I don't participate in Adobe Stock (or in Shutterstock). And there are too many other stock image companies for me to provide a breakdown of all their fees here.
However, creatives can crunch a few numbers for themselves to determine whether contributing to AI sourcing is really worth it. Many creatives are already losing work due to artificial intelligence, and if you're in that position and you're also receiving royalties from contributing your work to AI database sources, you can make the following quick calculation to see if those royalties offset any corresponding income loss.
(And note, the yearly Revenues below should be the Gross Revenues for your business for each year, which doesn't include deductions from any discounts you've given or refunds you've made. The Revenue numbers you use below are also supposed to be the totals before any taxes are deducted, since in some cases, taxes change from year to year. By using a purer Revenue quantity, you can get a more realistic idea of the specific impacts here.)
GROSS REVENUE YR 1: Higher Gross Revenues total for the first year
GROSS REVENUE YR 2: Lower Gross Revenues total since you lost money in this most recent year
INCOME FROM A.I. ROYALTIES > ( GROSS REVENUE YR 1 − GROSS REVENUE YR 2 )
So once you've performed the subtraction calculation in parentheses first, you can make the comparison with the quantity you've plugged in on the left side of this expression. Ideally, as shown above, all your royalties from contributing to AI database training should be "Greater Than" any revenue loss and should offset the profits you're losing due to the introduction of artificial intelligence into the creative market. (And I realize this expression isn't perfect. Sometime revenue loss may be due to other factors. And of course the pandemic threw off a lot of year-to-year income streams for artists. But many artists can tell right away when AI is directly replacing them. It's always a telltale hint when a magazine you used to draw covers for proudly announces the skin of their latest zine was done entirely using artificial intelligence!)
Unfortunately, as many AI stock artists are noticing, the very system they're contributing to is more often than not competing with the creative work they're personally trying to sell. So in reality, the following expression may more accurately convey their revenue streams:
INCOME FROM A.I. ROYALTIES < ( GROSS REVENUE YEAR 1 − GROSS REVENUE YEAR 2 )
So if the expression directly above represents your revenue streams on a regular basis, and your total income from AI stock royalties is consistently "Less Than" your gross revenue losses, you really might consider re-evaluating your situation. If your AI stock royalties are so minimal that they don't come close to offsetting the profits you're losing due to the general introduction of AI into creative fields, is it really worth participating in such AI contributor programs? Or are you basically just contributing your work to a system that may eventually seek to displace you and other creatives?
This comparison exercise is presented just to get creatives thinking about the viability of such AI royalty programs. It's not meant to be a substitute for consulting a financial expert or a certified accountant, and I would recommend you seek out certified financial experts for any relevant questions or concerns in this regard.
E) BLOCKING A.I. IP ADDRESSES & BOTS - As I already mentioned extensively in the INPUT section, OpenAI recently revealed one IP address range as well as the name of the bot that they use for internet scraping. I'm a bit hesitant to provide the link to OpenAI's blog post detailing this (simply because I'm not keen on redirecting people to OpenAI), but if you're intent on finding it, a quick web search will lead you to it. On August 7, 2023, search engine writer Barry Schwartz posted an article at "www.seroundtable.com" providing an excellent summary of the key facts, and for my purposes here, I'll refer you to his article.
The IP range that Mr. Schwartz reported for the OpenAI GPTbot is as follows:
40.83.2.64/28
So if you're a writer, a journalist, a photographer, an artist, or any other type of creative who doesn't want to allow your online work to fuel the training of the upcoming GPT-5 artificial intelligence model, then you can block the above IP address in your website Security settings. Different admin panels have different methods of accomplishing this, some more straightforward than others. And as far as I can tell, you should also be able to accomplish this if you have a Wordpress or Blogger site. In any case, refer to their appropriate documentation or help center for more details. In my Artist Resources section, I've covered just a few of the dozens of portfolio sites available to artists, and while they offer a quick way for creatives to get a portfolio online, they don't always provide the level of customization of web hosting companies. So they may or may not allow you to block a specific IP address from accessing your online portfolio of work. I repeat, consult with their documentation or message them directly if you can regarding this. (Disclaimer: All this information is provided as is, without any warranty of any kind. Before blocking any IP address from your site, verify that it's truly the one you intend to block. It's frustrating to your site visitors to block innocent and legitimate traffic. Also, be careful not to block your own IP address(es) or IP range, consequently locking yourself out from accessing your own site!)
Personally, I believe blocking malicious IP addresses (or in this case unwanted AI bots) is a more effective tactic than naming an unwanted bot in your Robots.txt file, because as I've repeatedly mentioned before, web crawlers can ignore those preferences. However, for those still interested in sending instructions to 1) OpenAI's ChatGPT web crawler; 2) OpenAI's ChatGPT plugin (as noted by Mike King on X, formerly known as Twitter); and 3) Common Crawl's bot (a web crawler that scrapes for some AI companies, as stated by Neil Clarke in Alistair Barr's Business Insider article in August 2023) , here's the text for it:
User-agent: GPTbot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: CCbot
Disallow: /
That isn't a complete Robots.txt file, but it's simply a snippet to specify instructions for these three bots. Since this article is long enough as it is, I won't go into details about Crawl-Delays & such or how to craft a complete Robots.txt file. But it's probably the easiest page of code you'll ever write (…if it can even be called code!), and educational resources on writing one are widely available online. Two quick notes about writing these: 1) Once you've written one, it usually goes into the main directory of your website, and for bloggers you may have to "Enable a custom robots.txt" & take a few other steps first (again, check the documentation for your application!); 2) Since a Robots.txt file is a publicly viewable document, DO NOT include the names of any password-protected directories & such within it! You may think delineating all your protected directories is a clever way to block bot access to sensitive files, but in reality you're just making it easier for hackers to locate your protected files!
As I've already mentioned, these are some of the steps that we as creatives must take to protect our online work. Because who really knows what GPT-5 will bring? It may warp our portrait drawings into Barbie-esque type figurines or it might give us the ability to morph our vocals into a more refined version of Alvin & the Chipmunks. Whatever form upcoming artificial intelligence models devolve into, it's better to take appropriate steps of prevention now, before our work becomes a garbled & diluted part of AI's ongoing development.
F) PROTECTIONS FOR AUTHORS - Earlier this month, Benj Edwards of Arstechnica covered a story highlighting how scammers are uploading bogus versions of e-books (often under the names of popular authors) and then playing the algorithms of well-known sites like Amazon or Goodreads to rise to the top of trending or recommended lists. And you guessed it, in some cases, the mediocre content is obviously generated by artificial intelligence. Solving something like this may involve going back to my previous recommendations of digitally watermarking all output from AI systems so digital publishers can immediately recognize this type of content & subsequently take a closer look at flagged text to see if it meets their standards or if the body of the book really matches what the title claims it is. However, in the meantime, innocent authors are bearing the consequences of having scammers not only devalue their name with worthless uploads, but they're likely also losing money while con artists trade on the good reputation of their name (…what's left of it anyway).
Can't we set up some kind of verification system to ensure these scammers don't prosper? If you have an established author, or even one who has published a few e-books already, surely there's a way to verify the identity of these individuals. Once that's done, any other e-books supposedly authored by this same person should be approved by this verified account first before permitting the publication of bogus content. Otherwise, subpar & misappropriated work may eventually bring down the overall value of entire digital publishing sites.
G) MADE WITHOUT A.I. - Centuries ago in my native country, the people had almost obscene quantities of gold and considerable stashes of silver, as well. When an arriving conqueror demanded they fill an entire room with gold, they had no problem doing so. Yet these rare and precious metals weren't what my ancestors prized the most. Instead, they treasured items such as hand-woven tapestries that required hours upon hours of painstaking work by skilled craftspeople to complete! Think about that for a moment. Fine craftsmanship and artisan skill were of greater value to them than any currency of the day!
Maybe my ancestors were on to something. With the endless influx of mindless, look-alike AI output today, a few (a very few, I might add) insightful writers are predicting a time when society will again seek out handmade and human-created artwork. After the feverish rush to saturate our senses with digitized replicas of art, perhaps these writers are right. Perhaps our culture will be so bloated with the identical & the mundane that people will yearn to marvel at and enjoy the fruits of true creativity by humans again.
It's unclear whether the tide will turn today or tomorrow. Either way, in the coming days I fully intend to label this website of mine as completely produced by a human, without the assistance of artificial intelligence. Now of course it's okay for some to incorporate AI in their creative work (as long as you're honest and transparent about it). But at least for now, my unapologetic statement for my personal work will clearly be "Made Without A.I.".
Of course there are a number of other industries expected to be affected by the rise of artificial intelligence, but I continue to find it interesting how AI companies made the biggest splash in the past year by targeting & misappropriating the work of artists and creatives. I realize that some consider artists to be glorified doodlers, and others possibly expected us to not even raise a ruckus about copyright violations, image theft, and unethical sourcing procedures. But as is evidenced in the past several months, artists are taking a stand and a loud one at that. And I'll continue to stand with them for the inherent rights of artists and creatives. ( * Added 7/10/23: And bravo to tech journalists such as Benj Edwards of Ars Technica who have done a truly excellent job of reporting on the developments in artificial intelligence.) Sure, some tech companies & business conglomerates will keep trying to steamroll creatives. And others will still label artists as wacky fruitcakes on the fringes of society. But as I've already mentioned previously, the world needs artists and the wonders & beauty they can bring. And people shouldn't be too hasty in dismissing creatives and their contributions. Because I truly believe there are artists out there that will change the world.
ADDENDUM: A.I. LEGISLATION & PENDING LAWSUITS
Trying to keep up with all the current lawsuits and all of the proposed legislation pertaining to AI could be a side job in and of itself, so I'm obviously not going to try to post every single relevant item. However, some of the pending litigation & attempts at regulation raise some important issues, and although I'm not a lawyer, I cant't help but offer my two cents on a few of these key topics.
(The first few subheadings accompanied by actual Legislation Numbers were sourced from the Brennan Center for Justice. It's not an organization I usually reference, but I heard about this convenient collection they had assembled that perfectly suited my purposes for this section. And even though I'm naming a few of these proposals here, that doesn't constitute my full endorsement, since I haven't read them in their entirety. There's always the chance that a few of these bills will be stuffed with a multitude of other unnecessary items, and some of these proposals may have long since been tabled during the legislative process. But the core of these first few common-sense proposals show that Congress can actually come up with very appropriate legislation regarding AI.)
ADVISORY FOR AI-GENERATED CONTENT ACT (S.2765) - September 12, 2023
Proposed by Senator Pete Rickets (R - NE), this legislation calls for watermarking AI content (which is what some artists & experts have been recommending for some time now). As challenging as this may end up being, it's clear there needs to be a way to distinguish content generated by artificial intelligence systems.
DEEPFAKES ACCOUNTABILITY ACT (H.R. 5586) - September 20, 2023
Representative Yvette Clarke (D - NY) presented this bill that would require disclosure for created deepfakes and would institute criminal & civil penalties for failing to do so. However, it's important to ask, does every deepfake rise to the level of being produced with criminal intent? Not likely. But while there obviously needs to remain a place for parody in our society, too many individuals & some agencies have been misappropriating the likeness of others with no legal consequences whatsoever.
NO ROBOT BOSSES ACT (S.2419) - July 20, 2023
Bravo to Senator Robert P. Casey Jr. (D - PA) for drafting legislation that would restrict autonomous decisions by employers! As I already wrote in my A.I. Decision-Making section above, "humans should not be ruled by soulless software". Enough said.
THE FAIR ACT - Leave it to Adobe, to, well, behave like Adobe. In September of 2023, David Meyer of Fortune revealed that Adobe had proposed the (so-called) FAIR Act (Federal Anti-Impersonation Right) in July of this year. His headline there effectively summarizes the issue: "Adobe wants victims of GenAI impersonation to sue the impersonator, not the tool." In effect, Adobe is proposing that tech companies should be free from liability for any impersonation or artistic style mimicry, etc., and only individual bad actors who create content with their tech that infringes on the rights of others should be held accountable. And this is so typical of big corporations: They want all the benefits of their products or software and none of the liability. That's almost like proposing that only street-level drug dealers should be held accountable for opioid overdoses, while overlooking the harm that certain negligent doctors or pharmaceutical companies have contributed to the problem.
While many of the opinions in conservative outlets are pro-AI, the other day I came across a quote by Jon Schweppe (Policy Director at American Principles Project) that echoes the need for accountability I've been espousing all year in both of my AI articles here. In an 11/23/23 Fox News article by Michael Lee, Mr. Schweppe stated, "AI companies and their creators should be held liable for everything their AI does, and Congress should create a private right of action giving citizens their day in court when AI harms them in a material way. This fear of liability would lead to self-correction in the marketplace…" Of course, these calls for corporate accountability are diametrically opposed to proposals such as the FAIR Act or other similar legislation. However, we can't ignore the potential for harm introduced by certain tech companies through their AI software. Whether through negligence, sloppiness, or brazen callousness, accountability should start at the very top of the product chain.
LAWSUIT AGAINST MIDJOURNEY, STABLE DIFFUSION, & DEVIANT ART - On October 31, 2023, a Petapixel article by Matt Growcoot reported that U.S. District Court Judge William H. Orrick dismissed a portion of the lawsuit against generative AI.. Midjourney, Stable Diffusion, and Deviant Art. Part of the judge's reported reasoning contends that it's not clear whether AI tools like Stable Diffusion retain image copies within their systems.
However, on February 6, 2023, Katyanna Quach wrote an article for The Register showing that DALL-E, Stable Diffusion, & Midjourney do retain images from training datasets in their memory. The research in the article was conducted by members of ETH Zurich, Google's DeepMind, Princeton University, and the University of California, Berkeley. While the total number of exact images that can be extracted by a user-generated prompts is relatively small compared to the size of the training set, it hints that some developers have been less than honest about the memorization capabilities of their generative AI systems.
Which brings me to my next point. When it's so obvious that AI developers stole & misappropriated content (text, images, photos, etc.) and some AI CEOs have even publicly admitted to doing so, I don't understand why this type of behavior isn't being considered and treated as piracy. Lets take a look at the words of honorable Judge Joseph Story in the 1841 Folsom v. March case: "It is certainly not necessary, to constitute an invasion of copyright, that the whole of a work should be copied, or even a large portion of it, in form or in substance. If so much is taken, that the value of the original is sensibly diminished, or the labors of the original author are substantially to an injurious extent appropriated by another, that is sufficient, in point of law, to constitute a piracy pro tanto."
I'm sure there have been other cases dealing with this issue since that 19th century verdict, but I'm still wondering why hundreds of years of copyright law & precedents have been ignored when it comes to the unscrupulous actions of the AI companies.
COPYRIGHT OFFICE REQUEST FOR PUBLIC COMMENT - I previously praised the U.S. Copyright Office for holding fast to the longstanding legal standards of copyright, but recently they seem to have wavered a bit in the face of pressure, as they've called for public comments regarding whether output from artificial intelligence should be copyrightable. It's interesting how their position seems to be shifting after only a few months, and this comes even after a federal judge ruled in mid-August 2023 that copyright protections apply only to human authors. (Read the article by Winston Cho in Hollywood Reporter on 8/18/23 or the one by Ben Wodecki in aibusiness.com on 8/21/23 for more detailed explanations of the judge's logic. Her reasoning is abundantly clear and reinforces centuries of copyright law.) The U.S. Copyright Office's recent decision to open up issues of copyright to public input makes me wonder if the department's employees are being subjected to the typical political pressures that government employees sometimes face. Because clearly there are politicians, as well as corporations, who have a vested interest (financial or otherwise) in allowing the output of artificial intelligence to be copyrightable. And I'm sure they're not shy about trying to exert their influence or impose their will on whomever might stand in their way.
I don't intend to submit public comments to the corresponding November 15th call, since my beliefs have already been spelled out in the two AI articles I've written here on my site. As a time-saving measure I don't fancy repeating what I've already stated, but I will say this: The problem with copyrighting any current form of AI output is the fact that these systems have been trained (unethically and without permission) on the work of human artists, both living & deceased. Any attempts to apply copyright protections to AI-generated art would be somewhat self-defeating & almost hypocritical, don't you think? Overlooking the copyright infringement that has taken place to train AI systems in order to award copyright protections to such AI-generated output would end up diminishing the value of copyright protections for everyone, and it would even come close to making a mockery of the very concept.
(This Addendum section was added on 11/10/23)
If you're tired of reading about artificial intelligence, I'm almost as tired of writing about it for now, and I really would like to get back to making more art! (Although I can't help but feel I've missed a recommendation from my eighth and final section above…If it formulates in my thoughts, I'll add it and highlight it with an obnoxious color for easy locating!)
I know I haven't minced my words in this article and I've used a more direct tone here, one usually reserved for my more politically-themed posts. However, it's a serious issue that unfortunately a lot of people aren't taking too seriously. Artificial intelligence is expected to cause a ripple effect of job losses across various professions, including lawyers, accountants, bank tellers, & database entry professionals, and the potential for harm & legal breaches such as copyright violations are clearly present in this technology. Yet some Congressional representatives have no qualms about making copyright protections a political issue, instead of dedicating to protect ownership rights for authors, whether they're individuals or large corporations. Even so, it only takes minimal observation to note that many Republicans & conservatives have a history of unapologetically siding with big business at the expense of their constituents, and it's an undeniable fact that many Democrats have no qualms about partnering with tech companies to silence & censor dissenting opinions. As much as I'm trying to be optimistic about the introduction of AI into society, it's clear that We The People need to be realistic about the willingness (or unwillingness) of our elected leaders to apply appropriate limits to this new technology. Sure, some of our representatives may prefer to take a wait-and-see approach, others feel overwhelmed by the rapid developments of AI in such as short time, and of course, others aren't the least bit interested in regulating an industry that may end up being of considerable benefit to them, either politically or financially. Perhaps that's why Congress is already beginning to authorize budgets for the use of artificial intelligence in government, as reported by Fox News recently. Looks like our leaders jumped over the informative hearings & regulation stages regarding AI, and dove right into the financing & implementation stage. What an interesting game of leap frog this will turn out to be!
And for those who believe the tech industry will somehow keep themselves in line concerning AI, just consider some of their newsworthy shortcomings in the past few years: Apple touts its privacy for users, yet it accepted billions of dollars from Google for the privilege of being the default search engine on its iPhones; Apple reportedly gave 3rd party contractors access to Siri voice data, some of which were highly sensitive; Amazon apparently didn't have the proper privacy protections in place to prevent some employees from spying on women users of their Ring camera systems; Google reportedly collected the information of minors who used their Chrome tablets in a school setting at one point; Google apparently had no qualms of turning over to law enforcement the information of innocent users who happened to be in the area where a crime was committed (geo-fencing); a lawsuit alleged that Meta & Pinterest have used algorithms that repeatedly suggest images referencing self-harm, body-shaming, and such, with especially devastating consequences for minors; Microsoft's XBox Live was recently found to have collected and retained the information of minors who used the system; and GitHub reportedly misappropriated the work of some brilliant coders to inform their own brand of artificial intelligence. Doesn't inspire much confidence in the tech industry, does it?
However, the future is still ours to shape. Will we rise up to meet the current challenges with wisdom & integrity or will we shrink back in the midst of such a pivotal time in our history?
❝ ❞ - In the unlikely event that anyone would like to quote a few excerpts from this article of mine or implement it in any of their regulations, first of all I'd be honored. Although I didn't write this to secure my fifteen minutes of fame, if you'd like to quote from here or incorporate it in guidelines or regulations, all I ask is that you give proper attribution & credit where credit's due, and restrict your usage of any portion of this article to non-commercial use only. (In other words, don't sell this article or any part of it for your personal profit or business.) Also, publishing this article in its entirety anywhere without my explicit consent and permission would be frowned upon. Thank you!
📝 - Since this is a really long piece, if you want to take note of all the specific changes I recently made, simply hit the "Refresh" button on your web browser as you scroll down and read, and the orange highlighted text will appear again. (And again and again, as long as you keep clicking the Refresh button!)
🖨 - If you're trying to print this article without the orange text, just wait about a minute until the highlighted color completely dissipates on your screen. Then follow your normal printing procedure and the text will print out in its finalized colors. It's like HTML & printer magic!
July 10, 2023 - I've now modified the code and specified a smaller printed text size for this entire article. This shaves almost 25 pages off the total final output, and saves a few trees in the process! Of course if you need the text larger or smaller, you can always scale it up or down in your print dialog before you send it to your printer.
July 13, 2023 - Different printers respond slightly differently to the instructions & pages sent to them. In some instances, the return to top icon that appears here on the bottom left-hand side of your browser only prints out on one page, but in other cases, it inconveniently prints out on every page! So I went ahead and excluded the icon from the printing specifications. (Also, I've noticed that clicking that icon bumps you back to the very top of this article in some web browsers, instead of bringing you directly to the Table of Contents as I coded it to do. Just a little frustrating! But I'm trying to get this to work correctly across other browsers…)
[ The recent additions to this article are highlighted momentarily in orange and were posted on July 7, 2023. ]
[ A few more journalist & media credits for certain news story references were added on July 10 and July 13, 2023, and these additions remain highlighted in orange for a few seconds longer.
One of the problems artists have with AI generative image models stems from the fact that developers have appropriated work without giving proper credit. So it would be a bit hypocritical of me to not provide credit to the diligent journalists who have done stellar work covering the topics found in this article. Of course, the ideas I've presented here are my own, but it would have been a less substantive piece if my knowledge of AI hadn't been informed by their effective coverage of the issues involved. Sure, sometimes I absorb (apolitical) tech news from a wide variety of sources, and I came across a few of the historical tech tidbits long before I even considered writing this article. And noting every source wasn't really a priority back then. However, if I do find a corresponding name to credit for a tech or historical reference, I'll definitely make the appropriate edits and include it. Thank you! ]
[ August 10, 2023: The latest additions to this article are briefly highlighted in a popsicle purple tone. As opposed to incorporating all the hues of the color spectrum to delineate new updates, all future edits / additions will be marked with this color, and the editing date will be added alongside any significant changes in subsequent postings. ]