On “Images” not “Pictures”
I made thousands of images for this project, not photos.
But people keep wanting to call them pictures. Some participants who graded images even named different people who seemingly appeared in them frequently.
I am adamant that these are images and not photos. For some reason I find it important to depersonalize generated content, but maybe I am wrong.
The following are a few days of notes I wrote in May 2024, questioning whether ‘AI’ is actually more human than we are.
May 18:
I think novelty is the only thing we have left as humans, novelty and uniqueness. Globalization and collectivized media have already narrowed our collective experience into smaller and smaller buckets. Closer attempts at AGI - or just assemblages of multiple ML technologies with something that decides which model to route a request through, sold in a package called AI - do a great job of being predictable and right. It replicates a world experience we are all funneling into, one that we believe we desire and want. It will solve all of our problems quickly and easily. On one hand, it should allow us to dream larger than ever before. Calculators allowed more people to do more complicated math more quickly. General-purpose computers allowed a billion things to happen more quickly. This AI tech may allow everything to happen more quickly. So what’s next?
I think there is still an argument for a quasi-utopian entertainment- and experience-based society. I have mulled over a framework for a philosophy of experimentalism for years now. I think that’s what this does. It’s arguably the most socially horrifying thing out of science fiction that we could ever imagine: a small group of unknown people consuming our souls to put into a machine that controls our lives. A group that will soon have the power to determine our beliefs, our actions, and our very thoughts. Though, come to think of it, we already live in that world. How many genocides is Facebook responsible for?
So, all that is different is that, for the moment, AI can help. It can also replace everything and everyone everywhere instantaneously - I am astonished that this has yet to happen, but it is coming.
In my first conversation with Phil a few months ago I said I was sheepish about the utility of AI as it stands, and I still believe that. I don’t think that implementing small little AI tools into pieces of the workflow will really help anything. Individual tasks can be aided by AI, like the natural language processing I described, but that took human work to classify the data set. That will never have to be done again for as long as humans exist on earth, and within a few months, if not already, it wouldn’t have needed to be done in the first place.
I don’t believe it makes any sense to try to incorporate AI into your workplace, or any workplace, right now. You can let people dabble with it, just like I’m sure you let people dabble with computers before they became standard-issue equipment for every person and became the interface for all of the work of every employee in the building, as well as every passenger who passes through the front door.
I don’t know what the future holds, but I know it doesn’t end with Microsoft CoPilot making spellcheck work a little better in Outlook.
If you want to spend money on products, sure. It can’t hurt. But if your workflow in ten - if not five - years isn’t as different as a green sun rising in the north, I will be astonished.
There is no limit to how quickly this stuff will improve - aside from, I suppose, the fab capacity at TSMC or the CCP.
Truly.
The day ChatGPT was released I told people that the world would never be the same. I had followed GPT’s development for years, and it is like Moore’s law, except instead of everything doubling every two years, everything doubles every two months.
Hell, the sun might be green by November.
There are no constraints; this is the first thing ever made that requires no labor and no materials. The sheer fact of interacting with it makes it more powerful. The more people interact with AI models, the better they get, constantly. We are the data - which, again, we have been for years. For nearly as long as I have been alive, my primary purpose to the tech world has been to provide data through existence. I carry a GPS tracker that listens to every word, sees everything I see with more and more cameras (including one that’s always pointing at my face), monitors my every movement with accelerometers and encourages me to add more (raise your hand if you have an Apple Watch, then realize the irony in that action), and feeds me my thoughts.
And we wonder how it is as good as people. And now it can talk to us.
I am always careful to depersonalize and dehumanize the things made by AI. They aren’t pictures, they are images - for example. But there will be a point in the near future, a point many have already reached without knowing or caring, when the products of ‘AI’ are human. Which makes sense. It is fed on human information. As I said earlier, these systems are our soul.
So what makes us human, then?
We say creativity - okay, AI has eaten all of our music, images, books, poems, research, and debate, and can regurgitate original works that we increasingly find indistinguishable from those made by humans. We say empathy - well, OpenAI’s latest release demonstrates that it can be a better friend, teacher, parent, and partner than most people can expect from those in their lives, because it has learned what we as humans want from other humans.
Then, is it skin? Blood? The air in our lungs and the cracks in our teeth?
Because if it can replicate what the human brain can do - because it has been trained on more of humanity than even the most devout opponent of Tabula Rasa could ever dream of cramming into the head of a baby - then it is human. It is humanity incarnate. It can also learn and evolve infinitely quickly. A teenager has to crash a car to learn to watch the road even in a parking lot; a generally trained model only needs to be fed the entire database of every motor vehicle accident from the entire history of automobiles across the globe, which it can process in less time than it took me to rip the bumper off the front of a Honda Civic while backing out of a parking space when I was 16.
AGI is more us than us.
To add on to this, I think an extremely large number of people are going to kill themselves. Setting aside the economic inequalities that are soon to detonate: people don’t know how to do nothing, people don’t feel good having no purpose, and we aren’t trained for this. Already people fill the void with drugs, and that is going to continue to accelerate. People living in a completely unfamiliar world are going to want to time travel with drugs into the grave. That’s a grim prediction, but I think suicide and overdose rates are going to skyrocket. I actually do not know what we are going to do with our cities; they seem antiquated.
May 19 continuation:
What does privacy mean? I can see an argument for using siloed AI stuff for the privacy reason, but that’s not why I use siloed stuff. I use it because I like control and I like to tinker - I can more clearly see what the black box is doing. But OpenAI is a very close collaborator with Microsoft. All the boxes in your office run Windows. Windows is collecting all kinds of telemetry data, biographic characteristics, and even typing data on every person in your office. Even stuff like government-specific, Portland-specific, airport-specific lingo and context is data it is easily scraping and could put into the OpenAI data pool - and I would be baffled if it hasn’t already. So my work of teaching a little network how to understand food orders was essentially just working a few days ahead of capability. An altruist feeds that data into GPT or sends it to OpenAI, and then my work is work that has been done for all of humanity to use. Running it offline, I guess, protects the competitive advantage of that network for a couple days, but not forever. Some of the super-fine-detail research I did for that dataset, like understanding which tiny Italian bodega makes which amaro, is probably safe information for a while, but that’s information the internet itself hardly knows, and it falls very close to the category of completely pointless.
Sommeliers might still have a job, people who are very good at explaining their senses.
People who can fix stuff - that’s probably still useful, I say, covered in drywall dust that has dried out my hands so much it is hard to type. ‘Robots’ will be able to build houses (they already are; I visited a 3D-printed neighborhood in Texas), but fixing stuff will probably require human hands for a while longer.
This started with privacy, but it’s getting back to nihilism. Any large-scale task will be able to be done better by mechanization with the assistance of modern machine learning; it already is in lots of places, including big parts of agriculture. The human-level stuff might be all we have left to matter - this actually wonderfully relates to my thoughts about a reversion to proto-tribalism and micro-collectivism. We can do farmsteads and artist communes - we may need to, because there will be no money for humans to make - but also because most of the drudgery that drags us to expensive cities or ties us to menial jobs will cease to exist. Why work at the PDX McDonald’s and live in Gresham when the PDX McDonald’s only has one human staffer a week? There’s no money, but also now you can live in Rhododendron and not fear that you are missing out on opportunities for work.
I think another component is that nothing you do at the airport is interesting or unique - as far as I know. You are specialists in fields who know the specifics of this facility, sure, but a generalized office-worker AI can do most of the details of your jobs. Specialized ML functions are more interesting: ML in a game like Trackmania can be better than a person learning from scratch because it doesn’t have to make the same mistake twice, and it can also detect and perform microscopic differences and weigh their outcomes. What ML is, beyond the AI moniker or the neural network mystique, is a really effective system of learning. You may have heard GPT described as a text predictor; well, it’s not that much different from the predictive text on your phone, or even autocorrect, just with more horsepower and better learning frameworks.
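To make the text-predictor comparison concrete, here is a toy sketch of the idea in its simplest possible form - a bigram model that, like phone predictive text, suggests the word most often seen after the current one. This is purely illustrative (the corpus and function names are mine, and real models like GPT predict over learned representations rather than raw word counts):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent word seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the couch"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" - it follows "the" most often
```

Scale the same learn-from-examples loop up by many orders of magnitude in data and parameters, and you get the "more horsepower and better learning frameworks" distinction: the mechanism is still prediction from observed patterns.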
It is all just learning, and the most visible thing being learned, the most eye-catching, the most frightening, is that computers are finally good at learning how to be us.
May 21 contribution: https://m.youtube.com/shorts/_6Nrgpym_2A
There’s AI stuff like that; it’s candy. I heard short-form content described on a podcast the other day as creating candy, and this is just that. I don’t like it, and it’s both the commodification of ‘AI’ tech and the proliferation of even more useless crap for people to bleed their minds and eyeball-dollars into. It works well enough, and no one cares. No one on Facebook cares that the weird Jesus art they see is AI. It’s all the same candy. I think we as discerning consumers of content are concerned about AI, but it isn’t for us - as it is implemented. It is for the people who don’t research at all, so asking a question that might hallucinate 5% of the time means they are getting 95% more information than before. It’s for people who watch content to bleed time away because they have nothing else in their lives; AI content is no better than the candy they were eating before. It’s like crack and diet crack - another phrase I think I heard on a podcast. It’s garbage either way. What makes AI garbage worse than human-made garbage? The only downside is that there is more of it, but when was the last time you heard about something running out of internet? There is enough content already - other than that it will cut into the meager salaries some creators might try to make, but even then, platforms like TikTok don’t really pay anyone; it’s all through brand deals, if at all.
And when it comes to influence - wow. I don’t think most discerning consumers intentionally dive into the gross side of algorithms. Play dumb with blank accounts and see how quickly one gets to radical conservatism, white nationalism, Russian and Chinese propaganda, and American law enforcement propaganda. It takes seconds. AI won’t change that, actually. Suggestion algorithms on media platforms are already powered by machine learning and have been forever. The power and efficacy of these algorithms is proof that ML works, is powerful, and is widely adopted. Most people just don’t think of it that way.
Which is why the whole discussion around AI is hilarious. Without a holistic view of our current world, one easily thinks that ChatGPT is revolutionary and worth completely shaking up every business and industry.
But it isn’t. It is still Machine Learning, and it is tech that has been around for a long time. It controls all of us already.
Oh, and it’s geotagged too, even if you have location turned off on your device, your IP still has a location. I open a new incognito YouTube window everywhere I travel and let me tell you the content in Hollywood is different from Corvallis, the default content and paths in Austin are way different from San Angelo, and Texas’s algorithms are nothing compared to South Dakota.
I love a good VPN, don’t get me wrong - I use a VPN fairly regularly - but sites have a fun way of blocking most VPNs, forcing users to participate in the algorithmic, machine-learned shaping of their online experience.
Google, for example. Want to do a Google search to see what the news is like in, I dunno, Ukraine while it is being invaded? Be prepared to solve multiple captchas for every single search, in an active, though maybe incidental, effort to dissuade users of the World Wide Web from leaving their tailored, geolocal search results.
Try it sometime while traveling, it’s surprising.