One of the most popular NFT categories in 2022 is artwork generated by artificial intelligence.
For a small amount of Ether, you can purchase artwork created by combining two of the most exciting emerging technologies: blockchain and artificial intelligence.
In order to buy and sell these unique AI-generated NFT pieces, you’ll first need to set up a crypto wallet and link it to the OpenSea website.
Among the NFT projects I’ve browsed, the AI-generated collection Project Argo is one that stood out.
What’s interesting about Project Argo is that the artist has been able to use AI to develop artwork that seems to embody characters or entities within each piece.
As seen in Fading above, the NFT appears (to me at least) to feature a face.
Artworks become much more interesting when they appear to feature individual characters and facial expressions.
All types of art – whether painting, literature, or music – seem to carry a positive or negative connotation.
Some artists create uplifting artwork (Norman Rockwell, or Michelangelo’s Sistine Chapel), others create mischievous or scandalous work, and still others create powerfully dark and emotional pieces.
What emotional sentiment do AI-generated NFTs evoke?
Looking through the list of images made by AI, I can’t help but wonder about the baseline emotion evoked by artificial intelligence.
What connotation and sentiment does the neural network create when generating these graphics?
To a lay observer, every image seems to evoke the same balance of emotion, somewhere between darkness and excitement.
If we had a way to measure emotional sentiment, where might each image fit on a scale that measures how optimistic or pessimistic it looks?
To me, it feels as though each image is quite close to neutral.
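To make the idea of such a scale concrete, here is a toy sketch in Python. The scores, titles, and thresholds are entirely made up for illustration – a real system might plug in an image-sentiment classifier instead.

```python
# Map a sentiment score in [-1, 1] to a coarse label. The thresholds and
# the per-piece scores below are invented placeholders, not model output.

def label_sentiment(score: float) -> str:
    """Classify a score on the pessimistic-to-optimistic scale."""
    if score <= -0.33:
        return "pessimistic"
    if score >= 0.33:
        return "optimistic"
    return "neutral"

# Hypothetical scores for a few pieces (made up for illustration).
hypothetical_scores = {"Fading": -0.05, "Dreamy City": 0.10}

for title, score in hypothetical_scores.items():
    print(title, label_sentiment(score))
```

Under this toy scale, both pieces land in the "neutral" band – matching the impression that each image sits quite close to neutral.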
Can art created by artificial intelligence evoke emotions?
The surprising thing is that, looking through all of these images on Project Argo, I get the sense that each piece depicts entities expressing very similar emotions.
And yet it’s difficult to put into words.
It’s like trying to explain what is experienced under the influence of psychedelics – often impossible to do in worldly terms.
The resemblance to faces in these images is only approximate. Although there is no true face, the outline of eyes, nose, mouth, etc. evokes the baseline expression a person might have when their face is at rest.
The entities depicted in these photos all have the same, neutral facial expression. Not a single smile, eyebrow raise, or wink. They all have the same, neutral look – one that you cannot quite figure out.
Yet the images don’t feel expressionless or emotionless.
They feel as if they are showing life at work. Life doing what life does – here to experience our world for the sake of its own existence.
Except that these entities are in a different place. They don’t exist in our world, but in an unknown realm – completely generated by AI.
What can humans learn from artificial intelligence by viewing its NFT artwork?
Each piece has a title that resembles some sort of techno-futuristic Armageddon – “The last island party” and “robot revolution”, for example.
Of the list, Dreamy City and Penguin stood out. They’re all worth a look.
When artificial intelligence designs an image for humans to look at, what is it trying to say to us?
As artificial intelligence looks back at us in society, could the message be “you can do better”?
Perhaps we can gather a sense for what AI feels, by looking at the artwork it creates, and by interpreting the types of creations that arise.
How to buy AI-Generated NFT Artwork:
In order to get involved and actually own a few NFTs, you’re going to need a crypto wallet.
Say what you will about robots taking over, or artificial intelligence tools replacing human labor in the workforce – the company’s website states that AI is here “to make art creation accessible to the masses”.
And to be frank, the creation process couldn’t be simpler or more intuitive – literally anyone can create something.
Here’s one below that I made, and even added a saccharine title.
Is AI Adoption Accelerating?
The ability to build applications that leverage AI as features yet can be used by the non-technical layperson means that AI adoption might end up happening faster than you might expect.
One can only imagine what artificial intelligence generated music might be like as AI applications expand.
I can’t wait for someone to build a way to create your own music using a process and technique that is as simple and easy as this one.
The purpose of Tesla AI Day is to get the world excited about what Tesla is doing in artificial intelligence beyond cars.
AI day is also a recruiting event for prospective engineers as the company ramps up hiring.
Key Takeaways from Tesla AI Day 2021
“Make useful AI that people love, and is unequivocally good.” – Elon Musk
Vertical integration is a common theme in the presentation – across software, hardware, neural net training, and more. This means that Tesla designs and builds a large percentage of its technology in house.
The company is able to auto label data sets as well as create simulation data sets with unlimited scenarios for training the neural network.
DOJO is Tesla’s supercomputer designed for one purpose – training neural networks. It will be in use and available next year.
The neural net architecture resembles the visual cortex of an animal.
They will build a humanoid robot (see Tesla Bot at right)
TLDR: Skynet is born? Hopefully not. Although Elon said that human-level superintelligence is certainly possible, both the car and the Tesla Bot are examples of building “narrow AI” to avoid AI becoming misaligned with humans.
FSD beta version 9
FSD (Full Self-Driving) is the autonomous system deployed to all cars, which customers can purchase for around $10,000.
The often-debated fact that Tesla does not use LIDAR, as many other autonomy-oriented companies like GM Cruise or Google Waymo do, means that its cars use only cameras to gather data about their surroundings and navigate the world. Although the company mentioned plans to upgrade the cameras, the current ones are still more than good enough.
The philosophy behind this decision is that roads were built for human eyes to see and navigate. Therefore, the cars should be able to gather sufficient data to navigate autonomously using only cameras.
Elon jokingly stated that because of this, someone could technically wear a T-shirt with a stop sign on it, and the car would stop. But ultimately, the company seems confident that cameras will be sufficient.
“It’s clearly headed to way better than human. Without a question.” – Elon Musk
Note: Tesla cars are not yet fully autonomous. Drivers still need to keep their attention and focus on the road at all times. 
Tesla has yet to reach the High Driving Automation level of autonomy (known as Level 4 autonomy).
FSD driver-assist benefits:
Navigate on Autopilot
Auto Lane Change
Traffic Light and Stop Control
Neural Net Architecture
There are 8 cameras surrounding the vehicle that capture images of the real world. Tesla’s system uses these images to create a 3D reconstruction of the scene in “vector space”.
Using these images and vector space rendering, the system makes predictions about what the car may encounter a few moments into the future, allowing the car to drive itself safely and without running into anything.
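As a toy illustration of what “predicting a few moments into the future” can mean, here is a minimal constant-velocity sketch in Python. The real system predicts far richer dynamics; the class, field names, and values below are my own inventions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    # Position and velocity in a simplified 2D "vector space" (meters, m/s).
    x: float
    y: float
    vx: float
    vy: float

def predict(obj: TrackedObject, dt: float) -> TrackedObject:
    """Constant-velocity prediction: where will the object be dt seconds out?"""
    return TrackedObject(obj.x + obj.vx * dt, obj.y + obj.vy * dt, obj.vx, obj.vy)

# A car 30 m ahead, closing on us at 5 m/s.
car_ahead = TrackedObject(x=0.0, y=30.0, vx=0.0, vy=-5.0)
print(predict(car_ahead, dt=2.0))  # its predicted position two seconds out
```

Even this crude extrapolation shows why prediction matters: the planner can react to where an object *will* be, not just where it is now.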
As the neural network is trained on more and more data, Tesla is slowly building a brain-like neural net that resembles the visual cortex of an animal.
The presenters mentioned that everything Tesla is building is fundamentally country agnostic. Although they are optimizing the neural net models for the US at this point, they will be able to extrapolate to other countries as well in the future.
The ability to plan allows the car to anticipate what other cars on the road are doing and adjust in real time.
Prediction and planning aside, the neural network at its upper limit has enough capacity to remember all of the roads and highways on planet Earth.
The presentation spent a significant amount of time diving into the specifics of Tesla’s Neural Net Architecture. For specifics, watch the replay of the 2021 AI day livestream.
Training Neural Networks – Data Required
Every time a human driver gets inside a Tesla, they are helping to train the neural network. Although this may make an incremental improvement, this is not enough training data.
These networks have hundreds of millions of parameters – it is incredibly important to get as many data sets as possible to create 3D renderings in vector space.
Millions of labels are needed, and each piece of data is essentially just a small video clip. Associated with each clip, you have the actual image/video data, odometer information, GPS coordinates, and more.
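Tesla’s actual clip schema is not public, but the description above could be sketched as a simple record like this (the field names are my own guesses, not Tesla’s):

```python
from dataclasses import dataclass, field

@dataclass
class TrainingClip:
    # Hypothetical record for one labeled training clip.
    frames: list                                 # raw camera frames (the video data)
    odometry: list                               # speed/steering readings over the clip
    gps: tuple                                   # (latitude, longitude) of the recording
    labels: list = field(default_factory=list)   # e.g. lane lines, vehicles, signs

# One clip, labeled with a single object of interest.
clip = TrainingClip(frames=[], odometry=[], gps=(37.77, -122.42))
clip.labels.append("stop_sign")
print(clip.labels)
```

Multiplying a record like this by millions of clips is what makes manual labeling so slow – and motivates the auto-labeling system described next.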
Tesla needs millions of vector space data sets to train these neural networks. In the spirit of vertical integration, there was formerly a team at Tesla that tediously labeled all of that data. But manual labeling proved to be too slow – there is a better way.
Auto Labeling Data Sets
Tesla developed an auto labeling system, allowing them to generate extremely large training data sets much faster for training the neural network. The auto labeling mechanism is extremely important.
“Without auto labeling, we would not be able to solve the self driving problem.” – Elon Musk
In addition to real-world data sets from camera footage, Tesla also is creating simulations of traffic scenarios.
It is like a video game, where Tesla Autopilot is the player. Simulation is helpful when data is difficult to source, difficult to label, or is in a closed loop.
Algorithms are able to create the simulation scenarios. These algorithms analyze where the system is failing, and then create more data around the failure points to allow the neural network to learn, improve, and handle those scenarios better in the future.
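A toy sketch of that failure-driven loop: find the scenarios the system handles worst, then generate more simulated data around them. The scenario names and success rates here are invented for illustration, not Tesla’s data.

```python
def generate_more_like(scenario: str, n: int) -> list:
    """Stand-in for a simulator producing n variations of a scenario."""
    return [f"{scenario}_variant_{i}" for i in range(n)]

# Hypothetical per-scenario success rates from an evaluation run.
success_rates = {
    "highway_merge": 0.99,
    "construction_zone": 0.72,
    "rain_at_night": 0.65,
}

# Oversample the failure points so the network sees more of them in training.
new_training_data = []
for scenario, rate in success_rates.items():
    if rate < 0.90:
        new_training_data.extend(generate_more_like(scenario, n=3))

print(len(new_training_data))  # 6 new simulated clips targeting the weak spots
```

The point of the pattern is that data generation is steered by the evaluation results, closing the loop between testing and training.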
Elon specifically discouraged the use of machine learning because it is extremely difficult, and largely not the right solution for most use cases.
Project Dojo, Tesla’s Supercomputer
Dojo is the name of the neural network training system. Like a training Dojo.
Given all the data and simulations required, there is a demand for speed and capacity in AI neural network training. This is where Dojo comes in.
Currently, it is difficult to scale up bandwidth and reduce latencies, because processors have not been traditionally designed for training neural nets.
This is why Tesla invented the DPU.
DPU – Dojo processing unit. Whereas CPUs and GPUs are not designed to train neural networks, the DPU is designed to train neural networks.
The goal is to achieve the best possible AI training performance, supporting larger complex models while being power efficient and cost effective. Elon said it will be available next year.
This effectively enhances the AI software system, improving FSD.
Dojo leverages a distributed compute architecture.
It is apparently capable of an exaFLOP, which means it can do a super high number of calculations per second – way more than your average computer. It is a supercomputer, after all…
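For scale: one exaFLOP is 10^18 floating-point operations per second. The laptop figure below is a rough assumption on my part, used only to convey the order of magnitude:

```python
# 1 exaFLOP/s = 10**18 floating-point operations per second.
exaflops = 10**18

# Very rough figure for a typical laptop CPU (~100 GFLOP/s) -- an assumption
# for scale, not a measured benchmark.
laptop_flops = 10**11

print(exaflops // laptop_flops)  # roughly ten million times faster
```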
Tesla will also make Dojo available to other companies that want to train their own neural networks, effectively building a platform for improving neural networks. This feels like an optimum opportunity to apply the “as-a-service” business model to the world of artificial intelligence and neural network training. By licensing out the use of Dojo, Tesla may be able to create yet another revenue stream for the company.
They have reportedly innovated in these chips in a way that means there are no roadblocks to extremely high bandwidth.
The software stack is completely vertically integrated. They build everything in house.
As the Dojo computer and neural network data sets improve the neural network, it is likely that the company will deploy the improved brain-like software upgrades via their over-the-air software updates.
Hardware and Computer Chips
One of the biggest goals is to minimize latency and maximize frame rate. These metrics may be familiar to you if you are involved in video games and graphics.
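The frame-rate framing implies a hard per-frame time budget. Assuming an illustrative 36 fps (my number, not a published Tesla spec), the budget works out as:

```python
# At a given frame rate, the whole pipeline (ingest the frame, run the
# neural net, act) must finish within one frame period.
fps = 36  # illustrative figure, not an official spec
budget_ms = 1000 / fps

print(round(budget_ms, 1))  # about 27.8 ms per frame
```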
There is a computer in the car that runs the neural network that has been trained by the massive data sets discussed above.
Allegedly, the computer chips for Tesla’s Full Self-Driving system are produced by Samsung.
The importance of computer chips to Tesla cannot be overstated. Various computer chips are used in all areas of the vehicle – even in the typically non-tech intensive parts of a car: computerized airbags, seat belts, doors and door handles, etc.
Given the reliance on them, the global computer chip and semiconductor shortage is certainly a hurdle to resolve.
Tesla Bot – known endearingly by Elon as “Optimus Subprime” – is a 5-foot-8-inch, 125-pound humanoid robot.
Given that the Tesla car is already essentially a robot, the Tesla Bot will simply use all the same technologies, in a device with a shape like that of a human.
It will make use of all the same tools that Tesla has in the car… such as 8 cameras, FSD computer, etc.
Elon was unfortunately reluctant to share any specific use cases, other than stating vaguely that it will do boring, repetitive, and dangerous tasks that humans do not want to do.
There are still many unknowns. Will the Tesla Bot have features similar to Siri or Amazon Alexa / Echo?
In addition to being a large automaker, Tesla is showing that they are very much a robotics, artificial intelligence, and software company.
Artificial intelligence image synthesis is now able to produce realistic images from simple, hand-drawn shapes.
Nvidia’s Artificial Intelligence tool to create Artwork
The AI-based artwork tool is built by Nvidia. The tool is called GauGAN, and is available for anyone to use online for free on Nvidia’s AI playground.
There are several implications of this technology, but the free beta version of the application is fun and allows non-artsy people (like me) to unleash our creative side.
How does GauGAN work?
GauGAN works using a machine learning system that’s known as a Generative Adversarial Network, which uses a statistical approach that allows two agents, known as the generator and the discriminator, to engage in an optimization based competition.
This process is a type of unsupervised learning for AI.
This technique ultimately creates more accurate, high-resolution renderings of photos from hand drawn shapes.
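To make the generator/discriminator competition concrete, here is a from-scratch sketch on a 1D toy problem – nothing like GauGAN’s real architecture, just the adversarial training pattern: the discriminator learns to tell real samples from generated ones, while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). Generator: G(z) = a*z + b,
# with z drawn from N(0, 1). Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.standard_normal()
    x_real = 4.0 + rng.standard_normal()
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w   # gradient of log D(x_fake) w.r.t. x_fake
    a += lr * grad_x * z
    b += lr * grad_x

print(round(b, 1))  # b should have drifted toward the real mean of 4
```

GauGAN applies the same adversarial idea at vastly larger scale, with convolutional networks and image data instead of two scalar parameters.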
As a user, you apply filters and select various labels from which the system will apply a style transfer algorithm to modify the color composition of basic single-colored areas and turn them into more photorealistic scenes.
How will AI Art Creation tools be used?
In digital-graphics disciplines, GauGAN is already being used by artists to build rapid prototypes of scenes and conceptual designs.
The time-saving potential of tools like GauGAN is huge.
The same is true for video game development. As games like Fortnite advance their in-game creative modes, similar tools could be added to help players make quick mock-ups of maps and environments.
We’ve seen science fiction movies like Terminator where robots go haywire and are exponentially more powerful than humans. These are cool to watch, but they’re just science fiction. It may seem somewhat silly to consider technologies like artificial intelligence or machine learning becoming as powerful as, or more powerful than, human cognitive abilities. Yet researchers have been thinking about AI for years:
Many of the smartest people in the world are cautiously fearful of the power that artificial intelligence may bring as it becomes more and more developed. Below I’ve included some important resources for someone interested in impacting the future of AI in a positive way.
“Many experts believe that there is a significant chance that humanity will develop machines more intelligent than ourselves during the 21st century. This could lead to large, rapid improvements in human welfare, but there are good reasons to think that it could also lead to disastrous outcomes. The problem of how one might design a highly intelligent machine to pursue realistic human goals safely is very poorly understood. If AI research continues to advance without enough work going into the research problem of controlling such machines, catastrophic accidents are much more likely to occur. Despite growing recognition of this challenge, fewer than 100 people worldwide are directly working on the problem.” – 80000hours.org
Deepmind, a company acquired by Google in 2014, is a world leader in artificial intelligence research and its application for positive impact.
The founders of Deepmind believe that AI will serve as a multiplier for human ingenuity, increasing our capacity to understand the mysteries of the universe and to tackle some of our most pressing real-world challenges.
OpenAI is a non-profit AI research company co-founded by Elon Musk that seeks to discover and enact the path to safe Artificial General Intelligence by influencing the conditions under which it is created.
“The best way to predict the future is to invent it.” – Alan Kay
The Center for Human-Compatible AI concisely brings up the problem of control with AI: “given that the solutions developed by such systems are intrinsically unpredictable by humans, it may occur that some such solutions result in negative and perhaps irreversible outcomes for humans.”
The Partnership on AI is an organization founded by partners from Amazon, Apple, Google, Microsoft, Facebook, IBM, Deepmind, and others. The partnership was established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
Additionally, there’s a short video depicting the Unfinished Fable of the Sparrows:
or you can read the story below if you prefer:
The Unfinished Fable of the Sparrows
It was the nest building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.
“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”
“Yes!” said another. “And we could use it to look after our elderly and our young.”
“It could give us advice and keep an eye out for the neighborhood cat,” added a third.
Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”
The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.
Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”
Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”
“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing directives set out by Pastus.
Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.
It is not known how the story ends, but the author dedicates this book to Scronkfinkle and his followers.
Well, what do you think? Should more effort be placed on helping Scronkfinkle figure out how to master the art of owl-taming and owl-domestication before trying to raise an owl on their own? Or, should more effort be placed on trying to find an owl egg and raise an owl? What are the dangers of raising an owl for the sparrows?