Category: computing

Self-Driving Autonomous Boats to Clean the Ocean and Waterways

“There’s no one cleaning up after the party…”

Thesis: In addition to reducing pollution, we need to build and deploy floating clean-up robots in waterways and canals across the globe.

Plastic bottles are thought to take about 450 years to break down [1]. Leaving plastic waste in landfills is a less than ideal solution.

And besides, a lot of plastic never makes it to the landfill. Over time, large plastic bottles break down into tiny and even microscopic “microplastics” that are so embedded within the ocean, sand, dirt, and topsoil of our world that they will never be removed.

Microplastics threaten the Great Lakes, and not just the water.
Microplastics [4]

There are a few questions to consider before jumping in:

What should we really do with plastics, styrofoams, and other non-decomposable garbage?

How will we identify, collect, and transport these plastics to a safe and environmentally friendly final resting place?

From the landfill and back again:

Answering these questions is simple in theory, but more challenging in practice.

Plastics are made from petroleum, which comes from deep underground. Petroleum includes resources like natural gas and crude oil. At a chemical level, these resources are made of hydrocarbons that can be turned into the polymers used to make plastic.

Given that plastics are made from petroleum, which comes from underground, the logical place to put plastic waste is back where it came from – deep underground, between 3,000 and 6,000 feet (roughly a mile down). [2]

It only makes sense that we should put them back where they came from. And perhaps the heat and pressure of Earth’s crust could accelerate the rate at which these waste products transition back into crude petroleum.

But pulling petroleum out of the ground is a challenging business. Humans rely on advanced petroleum engineering technologies to extract these hydrocarbons. Imagine how much complicated engineering and drilling would be required to return tons of plastic garbage to where it came from.

It would be next to impossible, and absolutely unaffordable. It’s not going to happen.

Great Pacific Garbage Patch. Source: B.parsons.edu.

Think of it like setting up for a party: you buy food and drinks for your guests, set up decorations, plan games and activities, send out invitations and logistics, and so on.

Preparing for a party is fun and requires a bit of planning and effort.

After the party is over, however, there is a similar amount of un-fun effort required to clean up. There are dirty dishes and trash to be cleaned and disposed of. There may be spilled drinks on carpet or furniture, and you have to use something like Resolve carpet cleaner to restore them to their original condition.

The work required to clean up after the party is significantly more difficult than setting up for the party.

The problem with pollution in our world is that there is no one cleaning up after the party. And understandably so. It’s a difficult, challenging, dirty, and expensive task.

Besides that, there’s no incentive to do so. Humans don’t want to clean up after other people all the time, yet everyone knows that all of us contribute to pollution.

This predicament is called the “Tragedy of the Commons”.

When you have a party at your house, you live in the direct vicinity of the mess that is left after a party. In the environment and world, however, people are able to artificially remove themselves from the mess created by society (aka pollution).

Although we have some vague perception that the Great Pacific Garbage Patch exists, because we don’t encounter it day to day as individuals, we are able to go on living our lives without feeling too bad about it.

For all the complex, energy-intensive manufacturing that goes into making gasoline for cars and plastic goods, the vehicle exhaust, garbage, and microplastics those products generate have nowhere to go; they simply accumulate in our atmosphere and oceans.

Humans seem to have accepted the fact that these are just left there.

But this is changing in some areas.

Some states like California and Hawaii have taken measures to prevent new plastic and garbage from entering the environment. Popular prevention measures include smog inspections and bans on plastic grocery bags and plastic straws.

Prevention is good, but cleaning up is still needed.

We need an efficient and scalable way to clean up the Earth.

In order to relieve humans of the burden, perhaps we can leverage machines to take on the majority of plastic and garbage collection tasks associated with removing pollution from the environment.

There is good news. Humans have started doing this already. In Xi’an, China, there is a machine the size of a skyscraper whose sole purpose is to filter and purify the air.

Xi’an tower in China [3]

It’s extremely exciting to see humans embarking on these types of developments. Since there is no natural incentive to build these (due to the Tragedy of the Commons), perhaps governments can create artificial incentives, offering contracts to engineering and development firms to build similar skyscrapers.

But how about plastic waste? How might we begin to remove plastics from the environment?

To make any meaningful change, we must start somewhere. China began with the noble mission of reducing air pollution, and has built air filtration skyscrapers.

To focus on removing plastics from the environment, targeting the ocean is a great place to start.

Cleaning the oceans with autonomous boats.

To remove pollution from the oceans, we need solar-powered autonomous boats whose sole purpose is dragging filtration systems through the ocean, and collecting plastic pollution.

Distinguishing plastic from organic material and avoiding biological life will be important. Perhaps some sort of artificial-intelligence image recognition could help identify plastic waste in the water.
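As a rough illustration of that idea, here is a minimal sketch of how a binary “plastic vs. not plastic” image classifier might be wired up by fine-tuning a pretrained network. Everything here is hypothetical – the model, labels, and threshold would still need to be trained and validated on real footage from the water – and it assumes PyTorch with torchvision 0.13 or newer.

```python
# Hypothetical sketch: binary "plastic vs. not plastic" image classifier
# built by fine-tuning a pretrained ResNet-18 (assumes torchvision >= 0.13).
# As written, the new head is untrained; it would need fine-tuning on
# labeled frames collected from real waterways before being useful.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Replace the ImageNet head with a 2-class head: [not_plastic, plastic].
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def looks_like_plastic(image_path: str) -> bool:
    """Return True if the (fine-tuned) model classifies the frame as plastic."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return logits.argmax(dim=1).item() == 1
```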

Where to begin? Logically, the best places to deploy these robots are near the areas of primary pollution – harbors, waterways, sewage outputs, etc.

An inland stretch of the Ala Wai canal.

Take the Ala Wai boat harbor in Honolulu, Hawaii.

The harbor connects the ocean to the Ala Wai canal.

A walk along the Ala Wai canal and a glance into the water reveal dirty water, old chairs, plastic bags, floating bottles, and more.

In a place as beautiful as Hawaii, it’s very sad to see any amount of garbage floating around.

And unfortunately, that water carries bacteria, sewage, garbage, and more into the gorgeous turquoise waters surrounding the island of Oahu, from which it is ultimately dispersed across the entire world.

Remember, all the oceans are connected. Despite different areas having different names, there is truly only one ocean on Earth. (I saw a comedian on Instagram make this exact point, and it stuck with me.)

In terms of next steps, we need to find someone to build the robotic floating garbage collectors. It won’t be an easy task, but it is 100% possible.

Using an autonomy infrastructure tool such as Applied Intuition might help with the development of the software.

How much water can be filtered?

We should take a small area of the global water system, like the Ala Wai canal, and deploy robotic cleaning ships there as a test. Multiplying length by width gives the surface area; combining that with an average of several depth measurements gives an estimate of total volume.

Each robotic boat’s filtering throughput can then be estimated in gallons per hour or gallons per day.
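Here is a back-of-the-envelope sketch of that sizing exercise. All of the numbers are illustrative placeholders, not measurements of the Ala Wai canal or of any real boat.

```python
# Rough sketch of the sizing exercise described above.
# Every figure below is an illustrative placeholder, not a measurement.
GALLONS_PER_CUBIC_METER = 264.17

length_m, width_m, avg_depth_m = 3000, 50, 2      # placeholder canal dimensions
volume_gal = length_m * width_m * avg_depth_m * GALLONS_PER_CUBIC_METER

boat_gph = 5000          # assumed filtering throughput per boat, gallons/hour
n_boats = 4

hours = volume_gal / (boat_gph * n_boats)
print(f"~{volume_gal:,.0f} gallons; ~{hours / 24:,.0f} days for {n_boats} boats to filter once")
```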

In a perfect system, the boats will charge via some sort of docking station, or even run on solar power. They will need to be incredibly energy efficient; since there is no need to propel themselves quickly, they can remain largely stationary.

These may then be scaled up to larger ships that cruise across the ocean autonomously, collecting garbage from the Great Pacific Garbage Patch and beyond.

To remove plastic from the ocean, it is only logical that we have water-filtration boats.

Final Thoughts

The white paper presented above simply contains ideas. I am not building these myself; I am just an idea maker.

And if you’ve made it to the end of this post, good news! These robots already do exist.

But if they already exist, why is there still so much plastic in the ocean? How much of an impact do these autonomous boats actually have on the reduction of plastic in the ocean?

And then of course a few follow up questions inevitably arise:

After collecting a large amount of the garbage, what do we do with the plastic? It will be great to remove it from the ocean, but where do we put it as a final resting place?

Follow the Future of Tech email newsletter (below), which is free and focuses on exploring emerging technology.

Get the Future of Technology letter each month. Sign up below.


Sources:

  1. Pelacase.com
  2. Where does crude oil come from
  3. Xi’an tower in China
  4. Microplastics

Make Your Own Paintings with Artificial Intelligence

Artificial intelligence has many applications – artwork is just one of them.

In the future, might artists rely on artificial intelligence tools in the creation process?

With an easy-to-use text-to-image generation process, anyone in the world can create original masterpieces regardless of technical know-how or skill level.

You don’t even have to know what artificial intelligence is to use tools like this. It just goes to show you the exciting power that AI is bringing to our lives.

All you have to do is type a few words, and optionally upload a starting image.

Here is a link to the website: https://creator.nightcafe.studio/create

The website itself is actually fun to use as well – it includes a list of top posts, so you can see which creations have been most popular and most liked by others in the community.

Currently, the most liked post is a picture that resembles a strawberry grenade:

Strawberry hand grenade just started to explode
created by Alex_Heart_Sun

The process of creating the artworks is quite satisfying. It allows you to express just a bit of creativity – coming up with words that might fit together to create something unique, finding a starting point image on Google, and then letting artificial intelligence do the rest.

The exceptional levels of complexity that can arise from such simple inputs are quite astonishing.

Write simple words, such as “A mess yet magical” and you might end up with something like the below:

Created by @celeste

To be honest, this isn’t the first website that allows you to create art with artificial intelligence, but it is one of the most fun, in my opinion.

Say what you will about robots taking over, or artificial intelligence tools replacing human labor in the workforce – the company’s website states that AI is here “to make art creation accessible to the masses”.

And to be frank, the creation process couldn’t be simpler or more intuitive – literally anyone can create something.

Here’s one below that I made, and even added a saccharine title.

This website uses artificial intelligence to enable people to create artwork without any skill, practice, or knowledge.
“Utopian Scarface 1 Million Years Into the Future” by Espresso Insight

The ability to build applications with AI features that even a non-technical layperson can use means that the adoption of AI might end up happening faster than you might expect.

One can only imagine what artificial intelligence generated music might be like as AI applications expand. I can’t wait for someone to build a way to create your own music using a process and technique that is as simple and easy as this one.

Get the Future of Technology letter each month. Sign up below.


Tesla AI day 2021 Recap and Takeaways

The purpose of Tesla AI Day is to get the world excited about what Tesla is doing in artificial intelligence beyond cars.

AI day is also a recruiting event for prospective engineers as the company ramps up hiring.

Key Takeaways from Tesla AI Day 2021

Source: Tesla AI day

Make useful AI that people love, and is unequivocally good. – Elon Musk

  • Vertical integration was a common theme in the presentation – across software, hardware, neural net training, and more. Tesla builds and designs a large percentage of its technology in house.
  • The company is able to auto label data sets as well as create simulation data sets with unlimited scenarios for training the neural network.
  • DOJO is Tesla’s supercomputer designed for one purpose – training neural networks. It will be in use and available next year.
  • The neural net architecture resembles the visual cortex of an animal.
  • They will build a humanoid robot (see the Tesla Bot section below).
  • TLDR: skynet is born? Hopefully not. Although Elon said that human-level superintelligence is certainly possible, both the car and the Tesla Bot are examples of building “narrow AI” to avoid AI being misaligned with humans.

FSD beta version 9

FSD (Full Self-Driving) is the autonomous system that is deployed to all cars, which customers can purchase for around $10,000.

Unlike many other autonomy-oriented companies such as GM’s Cruise or Google’s Waymo, Tesla does not use LIDAR – an often debated choice – which means its cars use only cameras to gather data about their surroundings and navigate the world. Although the company mentioned plans to upgrade the cameras, it maintains that the current cameras are more than good enough.

The philosophy behind this decision is that roads were built for human eyes to see and navigate. Therefore, the cars should be able to gather sufficient data to navigate autonomously using only cameras.

Elon jokingly stated that because of this, someone could technically wear a T-shirt with a stop sign on it, and the car would stop. But ultimately, the company seems confident that cameras will be sufficient.

“It’s clearly headed to way better than human. Without a question.” – Elon Musk

Note: Tesla cars are not yet fully autonomous. Drivers still need to keep their attention and focus on the road at all times. [1]

Tesla has yet to reach the High Driving Automation level of autonomy (known as Level 4).

FSD driver-assist benefits:

  • Navigate on Autopilot
  • Auto Lane Change
  • AutoPark
  • Summon
  • Traffic Light and Stop Control

Neural Net Architecture

There are 8 cameras surrounding the vehicle that capture images of the real world. Tesla’s system uses these images to create a 3D reconstruction of the scene in “vector space”.

Using these images and vector space rendering, the system makes predictions about what the car may encounter a few moments into the future, allowing the car to drive itself safely and without running into anything.
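To make the general idea concrete, here is a toy sketch – emphatically not Tesla’s actual architecture – of a network that encodes several camera images, fuses them into a single grid (a stand-in for the “vector space”), and predicts an occupancy map for now and a few moments ahead. It assumes PyTorch, and every layer size, channel name, and shape is invented for illustration.

```python
# Toy sketch (not Tesla's architecture): encode each of 8 camera images,
# fuse the features into one shared grid, and predict a short-horizon
# occupancy map that a planner could consume.
import torch
import torch.nn as nn

class ToyCameraFusion(nn.Module):
    def __init__(self, n_cams=8, feat=32, grid=64):
        super().__init__()
        # Shared per-camera encoder: image -> downsampled feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fusion: concatenate all camera features, project into one grid.
        self.fuse = nn.Conv2d(n_cams * feat, feat, 1)
        # Prediction head: occupancy now and a few frames ahead.
        self.head = nn.Conv2d(feat, 2, 1)  # channels: [occupied_now, occupied_soon]
        self.grid = grid

    def forward(self, cams):                      # cams: (B, 8, 3, H, W)
        b, n, c, h, w = cams.shape
        feats = self.encoder(cams.view(b * n, c, h, w))
        feats = nn.functional.interpolate(feats, size=(self.grid, self.grid))
        feats = feats.view(b, -1, self.grid, self.grid)
        fused = torch.relu(self.fuse(feats))
        return torch.sigmoid(self.head(fused))    # (B, 2, grid, grid)

# Example: one batch of 8 synthetic 128x128 camera frames.
occupancy = ToyCameraFusion()(torch.rand(1, 8, 3, 128, 128))
```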

As the neural network is trained on more and more data, Tesla is slowly building a brain-like system that resembles the visual cortex of an animal.

The presenters mentioned that everything Tesla is building is fundamentally country agnostic. Although they are optimizing the neural net models for the US at this point, they will be able to extrapolate to other countries as well in the future.

The ability to plan allows the car to predict what other cars on the road will do and adjust its behavior in real time.

Predictive and planning capabilities aside, at its upper limit the neural network has enough capacity to remember all of the roads and highways on planet Earth.

The presentation spent a significant amount of time diving into the specifics of Tesla’s Neural Net Architecture. For specifics, watch the replay of the 2021 AI day livestream.

Source: Tesla

Training Neural Networks – Data Required

Every time a human driver gets inside a Tesla, they are helping to train the neural network. Although this makes an incremental improvement, it is not enough training data on its own.

These networks have hundreds of millions of parameters – it is incredibly important to get as many data sets as possible to create 3D renderings in vector space.

Millions of labels are needed, and each piece of data is essentially just a small video clip. Associated with each clip, you have the actual image/video data, odometer information, GPS coordinates, and more.
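As a sketch of what one of those clips might look like as a data structure – the field names here are purely illustrative, not Tesla’s actual schema:

```python
# Hypothetical structure for one training clip, based on the fields mentioned
# above (frames, odometer info, GPS, labels). Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrainingClip:
    frames: List[bytes]                   # encoded camera frames for a short video clip
    timestamps_s: List[float]             # capture time of each frame, seconds
    odometry_m: List[float]               # cumulative distance traveled per frame, meters
    gps_track: List[Tuple[float, float]]  # (latitude, longitude) per frame
    labels: List[dict] = field(default_factory=list)  # e.g. lane lines, vehicles, signs
```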

Tesla needs millions of vector space data sets to train these neural networks. In the spirit of vertical integration, there was formerly a team at Tesla that tediously labeled all of that data. But manual labeling proved to be too slow – there is a better way.

Auto Labeling Data Sets

Tesla developed an auto labeling system, allowing them to generate extremely large training data sets much faster for training the neural network. The auto labeling mechanism is extremely important.

“Without auto labeling, we would not be able to solve the self driving problem.” – Elon Musk

Simulations

In addition to real-world data sets from camera footage, Tesla is also creating simulations of traffic scenarios.

It is like a video game, where Tesla Autopilot is the player. Simulation is helpful when data is difficult to source, difficult to label, or is in a closed loop.

Algorithms are able to create the simulation scenarios. These algorithms analyze where the system is failing, and then create more data around the failure points to allow the neural network to learn, improve, and handle those scenarios better in the future.
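Conceptually, “creating more data around the failure points” could look something like the following sketch, which perturbs the parameters of failed scenarios to generate nearby variants. The scenario fields and jitter logic are invented for illustration; this is not Tesla’s simulator.

```python
# Conceptual sketch of "create more data around the failure points":
# perturb the parameters of scenarios the system failed on to generate
# nearby variants for retraining. All names and fields are illustrative.
import random
from dataclasses import dataclass, replace

@dataclass
class Scenario:
    ego_speed_mps: float      # speed of the simulated car
    cut_in_gap_m: float       # gap at which another car cuts in
    friction: float           # road surface friction coefficient

def expand_failures(failures, variants_per_failure=10, jitter=0.15):
    """Generate new simulation scenarios clustered around known failures."""
    new_scenarios = []
    for f in failures:
        for _ in range(variants_per_failure):
            new_scenarios.append(replace(
                f,
                ego_speed_mps=f.ego_speed_mps * random.uniform(1 - jitter, 1 + jitter),
                cut_in_gap_m=f.cut_in_gap_m * random.uniform(1 - jitter, 1 + jitter),
                friction=min(1.0, f.friction * random.uniform(1 - jitter, 1 + jitter)),
            ))
    return new_scenarios

# Example: one known failure becomes ten nearby training scenarios.
variants = expand_failures([Scenario(ego_speed_mps=25.0, cut_in_gap_m=8.0, friction=0.6)])
```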

Interestingly, Elon discouraged the casual use of machine learning, saying it is extremely difficult and largely not the right solution for most use cases.

Project Dojo, Tesla’s Supercomputer

Dojo is the name of the neural network training system. Like a training Dojo.

Given all the data and simulations required, there is a demand for speed and capacity in AI neural network training. This is where Dojo comes in.

Dojo

Currently, it is difficult to scale up bandwidth and reduce latencies, because processors have not been traditionally designed for training neural nets.

This is why Tesla invented the DPU.

DPU stands for Dojo Processing Unit. Whereas CPUs and GPUs were not designed specifically for training neural networks, the DPU is purpose-built for it.

The goal is to achieve the best possible AI training performance, supporting larger complex models while being power efficient and cost effective. Elon said it will be available next year.

This effectively enhances the AI software system, improving FSD.

Dojo leverages a distributed compute architecture.

It is apparently capable of an exaFLOP – a quintillion (10^18) floating-point operations per second – far more than your average computer. It is a supercomputer, after all…

Tesla will also make Dojo available to other companies that want to train their own neural networks, effectively building a platform for improving neural networks. This feels like a prime opportunity to apply the “as-a-service” business model to the world of artificial intelligence and neural network training. By licensing out the use of Dojo, Tesla may be able to create yet another revenue stream for the company.

They have reportedly innovated in these chips in a way that removes roadblocks to extremely high bandwidth.

The software stack is completely vertically integrated. They build everything in house.

Source: Tesla

As the Dojo computer and neural network data sets improve the neural network, it is likely that the company will deploy the improved brain-like software upgrades via their over-the-air software updates.

Hardware and Computer Chips

One of the biggest goals is to minimize latency and maximize frame rate. These metrics may be familiar to you if you are involved in video games and graphics.

There is a computer in the car that runs the neural network that has been trained by the massive data sets discussed above.

Allegedly, the computer chips for Tesla’s Full Self-Driving system are produced by Samsung. [1]

The importance of computer chips to Tesla cannot be overstated. Various computer chips are used in all areas of the vehicle – even in the typically non-tech intensive parts of a car: computerized airbags, seat belts, doors and door handles, etc.

Given the reliance on them, the global computer chip and semiconductor shortage is certainly a hurdle to resolve. [1]

Tesla Bot

Tesla Bot – known endearingly by Elon as “Optimus Subprime” – is a 5-foot-8-inch, 125-pound humanoid robot.

Given that the Tesla car is already essentially a robot, the Tesla Bot will simply use all the same technologies, in a device with a shape like that of a human.

Tesla Bot screenshot. Source: Tesla.

It will make use of all the same tools that Tesla has in the car… such as 8 cameras, FSD computer, etc.

Elon was unfortunately reluctant to share any specific use cases, other than stating vaguely that it will do boring, repetitive, and dangerous tasks that humans do not want to do.

There are still many unknowns. Will the Tesla Bot have features similar to Siri or Amazon Alexa / Echo?

In addition to being a large automaker, Tesla is showing that they are very much a robotics, artificial intelligence, and software company.

Disclaimer: TSLA shareholder

Sources and references

  1. Criticism of Tesla (wikipedia article)
  2. Tesla.com AI day

Get the Future of Technology letter each month. Sign up below.

Processing…
Success! You're on the list.

How to see tech trends before everyone else

“Be careful whose advice you buy, but be patient with those who supply it.” -Mary Schmich

To understand the future (in any subject) and keep up with latest in emerging technology, there is a broad strategy that is immensely beneficial:

Follow “Who” is building and funding emerging tech.

Follow and learn from where smart people are investing their time, effort, energy, money, and other resources to make these technologies real.

Why? Because the most knowledgeable people in a field often have early access to data and information to use in their endeavors. Keeping track of what smart people are doing allows you to benefit from their information access.

Although information feels more accessible than ever, top researchers know about ground-breaking studies before everyone else. Breakthrough research papers are often not widely discussed and are missed by the headlines. Data also can take time before being published.

Venture capitalists fund companies that no one has heard of; they have perspectives and hypotheses that most of us have not considered. They seek the wisdom of experts and use it to make investment decisions.

I’m not an expert but I try to really know what the experts think and keep up to date as the experts change their mind. – Tim Urban, talking about AI.

Identify “What” is new and obscure.

Marc Andreessen calls it the ‘What do the nerds do on nights and weekends?’ test.

Said in a similar way: “what the smartest people in the world do on the weekends is what everyone else will do during the week in 10 years.”

What are the nerds talking about and working on that the greater population is not even aware of? Video games are a great example. Most people never would have thought that being a professional gamer was a viable way to earn money by livestreaming on platforms like Twitch. In fact, many people in 2021 probably still don’t realize this.

Identifying influential and intelligent people whose ideas are worth spreading is somewhat subjective. There isn’t a sure way to find the brilliant minds of a given area, but a few things to look for include:

  • Track record of success.
    • Which people have founded or been an early employee at successful companies? Which angel investors have had successful exits with their portfolio companies?
  • Network of other influential people in their circle.
  • Contrarian, not dedicated to mainstream ideas and conventional wisdom.

Go against consensus:

“What you listen to and who you listen to is what you become.” – Gary Vee, recent post on LinkedIn.

In his book Zero to One as well as his talks on YouTube, Peter Thiel shares his favorite interview question: “What important truth do very few people agree with you on?”.

This is not an easy question to answer.

To invest successfully, being able to think from contrarian viewpoints is extremely important.

Holding a hypothesis about a business or about the world that is against the consensus of the general population creates the risk of being wrong. However, by applying the scientific method, a founder can test whether or not this hypothesis is indeed true.

Holding contrarian viewpoints means betting on something that is underrated and undervalued. It means people must disagree with you today, and agree with you in the future.

Although tough to stomach, having people disagree with your hypothesis in the present is a prerequisite to a successful investment thesis.

When done well, spending time and energy on contrarian ideas resembles the “buy low, sell high” approach in investing. When most people regard something as worthless or irrelevant, it is affordable and easy to access. The thing is, most people don’t care about being involved with something that isn’t worth anything today.

A recent example is cryptocurrency. In 2009 or 2010, conversations about cryptocurrency were probably generally ignored. People working on crypto had a unique hypothesis about the future of this technology, and spent time building projects. Vitalik Buterin was building the Ethereum blockchain before most of the world even knew what cryptocurrency was. At the time, cryptocurrency was highly undervalued. Because Vitalik believed there was value in working on building projects in this arena, he spent massive time and energy creating a platform and accumulating skills and experience. Now that the rest of the population is realizing the value of cryptocurrency, Vitalik’s project Ethereum has grown exponentially in value.

This type of growth would not have been possible if Vitalik had not initially pursued an idea that most people would have considered worthless.

Often, the smartest people in the world know things that you don’t. They sit on the boards of highly technical and innovative companies. Their circles include influential people in business internationally.

How and Where to find new ideas?

It’s easier said than done, and there really isn’t a single way to discover emerging trends.

Start by thinking about commonly held beliefs and accepted truths, then flip them around to find areas where the majority may be wrong.

Follow people on Twitter. Being able to read the real-time thoughts of someone that has figured out how to start and launch successful tech companies might give you ideas about how you can do the same. Paul Graham, who has written timeless essays that dispense wisdom for technology founders, Tweets quite often. His essays on business, software, and startups are second to none.

Read subreddits. Despite the large number of trolls, misinformation, and time-wasting content on the site, Reddit is a great way to obtain a general understanding of a topic by reading forum threads and seeing what everyone is saying about it. Find the small communities with a dedicated following, and become a contributor.

Listen to interviews and podcast appearances with noteworthy people.

Set specific Google Alerts. For example, setting a Google Alert for Gwynne Shotwell, COO of SpaceX, might help you stay up to date with excitement in the space travel industry, such as rapid point-to-point rocket travel. Some of the greatest business minds don’t have a huge online presence; setting a Google Alert for “Warren Buffett” will help you stay ahead of any big moves Berkshire Hathaway makes, for example.

Following the Future of Tech letter can help you identify macro trends and insights on crypto, biotech, space travel, technology, and the future.

Exploding Topics may help you keep track of where there is greater interest in specific Google searches.

Every piece of knowledge acquired is just one data point – not everything should be acted upon. Accumulation of knowledge and insights comes with slow and gradual realization of how much you don’t know. I will leave you with this: As Socrates said, “I know that I know nothing”.

Reach out to me on Twitter @espressoinsight and let me know your thoughts.

Progress of Autonomous Vehicles Over Time

Driving a car is the MOST dangerous thing we do every day – roughly 40,000 people die in cars each year in the United States.

Humans are really bad drivers.

To get a driver’s license, you’re given a 25 question multiple choice test at the DMV and then get behind the wheel.

Humans don’t work towards being excellent drivers the way they train for a marathon, study for medical school, or practice an instrument.

Poor driving is amplified by distraction – checking phone notifications, texts, social feed, etc. How much can the average person be expected to maintain focus with their eyes off the road?

Autonomous vehicles could save tens of thousands of human lives per year.

Get the Future of Technology letter each month. Sign up below.


The image below shows a graph of the advancement and sophistication of autonomous vehicles as time and technology moves forward.

Progress of AVs hits an inflection point where rapid improvement shows up as a steep curve, which then slowly approaches an asymptotic limit of flawless autonomous driving.

As AVs approach this limit, the system will theoretically never be perfect, but it will become good enough that the chance of an autonomous vehicle colliding with another object is practically zero.

This is the point at which vehicles are 99.9999…..% safe, highlighted as the green line. As systems continue to be developed, we expect the “march of nines” above to approach closer and closer to 100%.
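To get a feel for what each additional “nine” buys, here is a tiny illustration of how the expected number of incidents falls as per-trip reliability improves. The trip count is an arbitrary illustrative figure, not real data.

```python
# Rough illustration of the "march of nines": how the expected number of
# incidents over many trips falls as per-trip reliability gains more nines.
trips = 1_000_000  # illustrative fleet trip count, not real data

for nines in range(2, 8):                 # 99% ... 99.99999%
    reliability = 1 - 10 ** (-nines)
    expected_incidents = trips * (1 - reliability)
    print(f"{reliability:.7%} reliable -> ~{expected_incidents:,.0f} incidents per {trips:,} trips")
```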

There is a big difference between a small fender bender and a fatal collision.

Before self-driving technology reaches the point where any collision on the road is statistically unlikely, an autonomous system will first need to reach the point where the risk of a fatal collision is practically zero.

This may be mitigated by incorporating risk-avoidance techniques such as slowing down in high-traffic areas, or even designing fleets of cars that can communicate with one another.

Although autonomy progress has been drawn as a sigmoidal curve above, one could argue that actual progress looks more like a logarithmic curve if there is no period of slow progress before the inflection point.

In either case, self-driving cars continue to get better. Some companies have already built autonomous vehicles that feel safer than human-driven cars, but these systems are still not entirely ready for the roads.

Humans will not accept AVs unless they are essentially 100% safe, even if they are already safer than human drivers. It’s not that humans feel nonchalant about the 40,000 people who die in car crashes each year; it’s more that humans have extremely high expectations of technology.

Despite the fact that our cell phones send information through the air, we get frustrated when internet speeds are slow and it takes a few seconds longer to get an answer from Google. A lack of complacency isn’t exactly a negative thing; it promotes technological advancement.

Any non-zero number of self-driving car crash fatalities is absolutely unacceptable. The infamous Uber autonomous vehicle crash in Tempe, Arizona was a tragic nightmare. Ultimately, autonomous vehicle technology must be essentially perfect before humans will accept it.

Does this highlight some principle that is distinct to human mentality? Here are two examples:

Example 1: Humans have irrational fears

I have a few close friends who choose not to surf or go in the ocean because they are afraid of sharks. This is socially acceptable. But I have never met a single person that avoids riding in an automobile out of fear.

To be fair, transportation is pretty much mandatory for a lot of things in life, whereas going swimming in the ocean is trivial and not a requirement.

Why does a fear of sharks continue to be so disproportionally high among humans, compared to driving, which is orders of magnitude more dangerous?

Example 2: Humans are borderline incompetent at most things…

And our only hope is to create tools to help us accomplish the things we need to do. Expecting a human to drive a car is like expecting someone to prepare and serve a full-course dinner without any of the tools that exist in a kitchen. While someone might be able to build a fire without matches and maintain a consistent cooking temperature, it is extremely likely that they burn the food and make an awful-tasting meal. Without tools that help us cook, we’re incompetent. Even with tools like utensils and appliances, most people still have a hard time successfully preparing a meal. Even with the most advanced stovetop and cookware, cooking is difficult and takes just the right amount of time and patience to get right.

Transportation is no different. Humans were incompetent at all forms of transportation before railroads and the combustion engine. With engines and automobiles, we’re still awful drivers.

Lewis Hamilton (Mercedes) - GP of Spain 2019
source: Eurosport

A car is just a tool. It is a solution to the slow transportation problem.

Some people drive cars for fun. Most people drive cars because they have the basic human need to move from one place to another at will. Autonomous vehicles will make transportation safer and more effective.

Get the Future of Technology letter each month. Sign up below.


Source:

  1. According to the National Safety Council, over 40,000 people were killed in vehicle-related incidents in 2018. During the previous 3 years, there were more than 120,000 total fatalities.

How to Create Art with Artificial Intelligence for Free

Artificial intelligence image synthesis is now able to produce realistic images from simple, hand-drawn shapes.

Nvidia’s Artificial Intelligence tool to create Artwork

The AI based artwork tool is built by Nvidia. The tool is called GauGAN, and is available for anyone to use online for free on Nvidia’s AI playground.

Although there are several implications of this technology, the free beta version of the application is fun and allows non-artsy people (like me) to unleash our creative side.

AI-generated painting by Ryan via Nvidia GauGAN.

How does GauGAN work?

GauGAN is built on a machine learning system known as a Generative Adversarial Network (GAN), in which two agents, known as the generator and the discriminator, engage in an optimization-based competition.

This adversarial training process is often described as a form of unsupervised learning.

This technique ultimately creates more accurate, high-resolution renderings of photos from hand drawn shapes.

As a user, you sketch basic single-colored areas, select labels (such as sky, water, or mountain), and apply filters; the system then uses a style-transfer-like process to turn those simple regions into more photorealistic scenes.
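To show the generator-versus-discriminator competition in its simplest form, here is a minimal GAN trained on a toy 2-D distribution. This is not GauGAN itself (which is a much larger conditional GAN trained on semantic maps); it is just a sketch of the underlying idea, assuming PyTorch is available.

```python
# Minimal generator-vs-discriminator sketch (not GauGAN itself): the generator
# learns to mimic a simple 2-D data distribution while the discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 2) * 0.3 + torch.tensor([2.0, -1.0])  # "real" distribution

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train discriminator: label real samples 1, generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train generator: try to make the discriminator call fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean(dim=0).detach())  # drifts toward [2, -1]
```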

How will AI Art Creation tools be used?

For digital graphic related disciplines, GauGAN is already being used by artists to build rapid prototypes of scenes and conceptual designs.

The time-saving potential of tools like GauGAN is huge.

The same is true for video game development. As the creators of games like Fortnite expand their in-game creative modes, similar tools could be added to help players make quick mock-ups of maps and environments.

Get the Future of Technology letter each month. Sign up below.


Sources:

http://nvidia-research-mingyuliu.com/gaugan/

Nvidia AI Playground: https://www.nvidia.com/en-us/research/ai-playground/

Conway’s Game of Life: Cellular Automata

Game of Life cellular automata rules

1. Each square is a cell.

2. Cells have two possible states: on or off, alive or dead.

3. Cells fluctuate between alive and dead based on their interaction with the 8 neighboring cells.

4. Cells are born if there are exactly 3 live neighboring cells.

5. Cells die if they have fewer than 2 live neighbors or more than 3 live neighbors.

6. Cells survive if they have exactly 2 or 3 live neighbors.
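The rules above translate almost directly into code. Here is a minimal, dependency-free sketch of one update step (pure Python; edge cells simply have fewer neighbors rather than wrapping around):

```python
# A direct translation of the six rules above into a single update step
# on a 2-D grid of 0/1 cells (pure Python, no dependencies).
def life_step(grid):
    """Apply one generation of Conway's rules to a list-of-lists grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors (edge cells simply have fewer neighbors).
            neighbors = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if grid[r][c] == 1:
                nxt[r][c] = 1 if neighbors in (2, 3) else 0   # survival / death
            else:
                nxt[r][c] = 1 if neighbors == 3 else 0        # birth
    return nxt

# Example: a "blinker" oscillates between a horizontal and vertical bar.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```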

To try the game of life for yourself:

best simulation: https://copy.sh/life/

Other game of life simulations:
https://www.dcode.fr/game-of-life

Those small but intricate moving squares are a programmatic simulation known as a cellular automaton. This one is called the Game of Life.


The Game of Life is a computer simulation created by mathematician John Conway. There are infinitely many combinations for how the game grows and progresses – in fact, it’s not really a game at all. It’s technically a zero-player game, which means that the game’s evolution is determined by its initial state with no further input.

Fungi Game of Life

So I’m throwing out the challenge mentioned in the YouTube video: I can’t figure out how to make the game exhibit the fungi-like behavior again. I’m sure someone out there will be able to identify the correct settings. (Please let me know.)

Although the game maintains such simple rules, unimaginable levels of complexity can result.

Gliders are a crazy phenomenon in the game: they move forever, infinitely in one direction.

These tiny blocks move in unpredictable patterns that can only be understood by letting the simulation play out. But every process in the simulation follows the rules of the system.


This is quite similar to the laws of physics that govern the way particles interact at an atomic level within our universe.

Could it be that these incredibly simple rules are analogous to the rules governing how protons, neutrons, and electrons interact in our own universe?

Much like the way small building blocks in the game combine into larger, multi-functioning entities that accomplish strange and varied tasks, our universe has atoms that form molecules, which combine to form life as we know it.

How will 5G impact Gaming?

Now more than a buzzword, 5G will revolutionize mobile networks. This article covers 5G, discussing how it may impact gaming.

  • What is 5G?
  • How will it make gaming better?
  • What will this make possible?
  • How will 5G impact data plans?
  • When will it be here?

What is 5G?

Not to be confused with 5Ghz wifi, 5G cellular networks will open up entirely new possibilities that weren’t feasible on slower networks.

5G is the next step up in cellular network technology (there was 3G, then 4G LTE, and now 5G). 5G mobile internet is reported to be roughly 10 times faster than LTE; it increases wireless broadband speeds for smartphones and other devices, thus improving data transfer on cellular and mobile networks.

Although dramatically impactful in many disciplines (medicine: surgery performed remotely; automobiles: self-driving cars; IoT: smart cities), 5G will also enable and improve many aspects of gaming.

How will 5G impact gaming?

5G will make gaming – specifically mobile gaming – better. This will include:

1. Immediate responsiveness:

This means lower latency. Latency is the time it takes to transmit data from your device to the network and back again. Lower latency increases the speed with which a command registers on your screen, so gamers can see the results of their actions, and make decisions, more quickly.
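As a rough back-of-the-envelope illustration of what that means in frames, using commonly cited ballpark round-trip figures (~50 ms on 4G, ~10 ms on 5G) – treat these as assumptions, not measurements:

```python
# Back-of-the-envelope: what lower latency buys you in frames at 60 FPS.
# The 4G/5G round-trip figures are rough assumptions, not measurements.
frame_time_ms = 1000 / 60        # one frame at 60 FPS is about 16.7 ms
latency_4g_ms, latency_5g_ms = 50, 10

saved_ms = latency_4g_ms - latency_5g_ms
print(f"Latency saved: {saved_ms} ms, or about {saved_ms / frame_time_ms:.1f} frames at 60 FPS")
```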

2. Better mobile graphics:

The ability to transmit and receive larger amounts of data will allow mobile frame rates to increase (as of January 2020, Fortnite on console can handle up to 60 FPS, whereas Fortnite on PC and iPad can reach up to 120 FPS, for example). With a higher frame rate, graphics will be crisper and player movements sharper and easier to follow.

3. Faster downloads:

5G will notably increase the download speed for games and other apps on your phone. Today, downloading mobile games (many of which are well over 1GB) requires gamers to connect to wifi and plug in their phone to prevent battery drainage. 5G will allow games of this size to be downloaded without a problem. Data plans will likely be larger, so people will be more apt to download these games on the go. This will lower the barrier to entry for all types of video games, broadly increasing gaming’s popularity, from mobile and casual gaming to esports.

4. Better reliability

Because 5G will provide extremely stable connections, gamers should rarely experience games dropping their connection or becoming unsteady. As more people gain stable and reliable internet connections, massively multiplayer online mobile games will continue to grow, expanding the social aspects of many games such as Fortnite while providing space for the birth of mobile-first multiplayer online games.

5. The end of lagging?

Lagging gets worse when a game’s network reaches peak player volume. Since 5G can handle a higher network throughput than 4G, game servers will manage peak-hour data usage more efficiently. With 5G’s reported dramatic speed improvements, gamers can expect little to no lag and more consistent, high-quality gameplay.

However, game companies will inevitably develop more complex and graphic intensive games. Especially with the continued growth of virtual reality gaming, the processing power required to run video games can be expected to increase.

While 5G will allow game developers to do more with future video games, we can expect them to continue to push the envelope of technology, eventually reaching the limits of 5G.

What will 5G make possible?

1. Cloud Gaming

Cloud gaming should allow you to play high end, graphics intensive online multiplayer games on your mobile device with low (if any) latency and extremely high quality, resolution, and response time.

You won’t be able to tell that you’re playing on a mobile device. Look for the following cloud gaming systems to arrive soon:

  • PlayStation Now
  • Google Stadia
  • Microsoft xCloud

2. VR Streaming

Virtual reality entertainment has higher bandwidth requirements for a few reasons:

  1. Two images need to be streamed for VR: one to the left eye and one to the right eye.
  2. Both of these images need to be high resolution to deliver a high-quality experience.
  3. A low frame rate is one of the chief reasons that some VR players get headaches or have poor-quality experiences in VR gaming.

5G’s ability to handle this higher data transmission and receipt will make VR streaming of movies and games much smoother.
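A rough, uncompressed back-of-the-envelope estimate shows why VR streaming is so bandwidth-hungry. Real streams are heavily compressed, so treat this purely as an upper-bound illustration with made-up but plausible resolution and frame-rate figures:

```python
# Rough, uncompressed upper-bound estimate of VR streaming bandwidth:
# two eyes, high resolution, high frame rate. Real streams are heavily
# compressed, so this only illustrates the order of magnitude involved.
width, height = 1920, 1080      # per-eye resolution (illustrative)
fps = 90                        # VR-typical frame rate (illustrative)
bits_per_pixel = 24             # uncompressed RGB

bits_per_second = 2 * width * height * fps * bits_per_pixel   # x2 for both eyes
print(f"Uncompressed: {bits_per_second / 1e9:.1f} Gbit/s before any video compression")
```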

3. Future of mobile devices

5G will enable a new breed of handheld gaming consoles – thin 5G shells that transmit and receive high-quality graphics via cloud computing. With the power of 5G, heavy local processing power is no longer needed on the device.

In the future, the purposes served by multiple devices (Xbox, PC, mobile, etc.) may combine into one device optimized for transmitting and receiving data via the cloud.

With this data being processed non-locally, there is no need to purchase physical copies of games and other software, since they can be easily procured via the internet.

How will 5G affect data plans?

For consumers, it is hard to know if 5G will mean higher prices or not. Given that 5G is an enabler of extremely high download capacity, unlimited data plans will likely be required to take advantage of 5G.

The good news is that some companies, such as T-Mobile, have pledged to keep prices the same as 4G.

Data caps may still be a factor as they are today, but ultimately 5G will make the ability to transmit and receive data more abundant.

When will 5G be here?

5G is already here; it’s just not widely distributed yet. Some carriers, such as T-Mobile and AT&T, already offer 5G. We recommend contacting your provider directly to inquire about 5G and what it would take to get access.


How to use Virtual Reality in Human Resources 2020

Your Free MONTHLY newsletter,
the Emerging Tech Top 3.


Using virtual reality technology in an organization may seem futuristic. But VR is quite well established; current technology systems exist and are readily available for implementation. In the virtual world of 2020, businesses have an opportunity to leverage VR to serve a wide array of functions.

But with an emerging technology like VR, where should a business start? It turns out HR is a safe place to establish a pilot virtual reality strategy.

In this post, we’re going to discuss the applications of virtual reality to human resources. We will focus on both benefits and potential return on investment of implementing such systems.

“Employees are a company’s most valuable asset.”

To stay modern, HR leaders have a responsibility to invest in the best tools for their employees.

From the initial job application up to an employee’s daily experience post-hire, VR will change the employee journey.

If HR leaders can learn anything from their own workforce, it’s that employees crave a modernized work experience. A big part of this is making sure your employees have the latest and greatest tools. It’s likely that employee adoption of VR will be quite rapid, since it’s intuitive and easy to use. Technology companies like STRIVR are building VR tools oriented around the business user.

With a strategic up front investment in time and resources, companies may see long term benefits including:

  • reduce travel expenses
  • reduce costs
  • save time
  • enable employee flexibility
  • enhance productivity
  • increase efficiency, and more

Many human resources departments are taking steps into VR. Let’s look at a few HR processes where you have the potential to leverage VR today:

VR in HR in 2020

Interviews & Candidate Screening:

the benefits: by leveraging VR to screen candidates and conduct interviews remotely, firms can minimize travel expenses without sacrificing a sense of personal connection.

During the candidate screening process, VR will foster a collaborative and interactive environment for prospective employees to meet with recruiters and hiring managers. VR may allow candidates to work out problems in front of the interviewer in ways that we cannot yet imagine. Additionally, group interview sessions could be done remotely while maintaining the personal and collaborative feel that an in-person interview would have.
As opposed to a phone or email exchange, VR may make the subtleties of person-to-person interaction more fluid and natural, taking body language and other non-verbal communication into account.
Although VR may seem rare today, comparable technology to conduct video interviews is already being used by many companies. Software like HireVue makes this possible by organizing video interviews with candidates. Virtual reality would only enhance the immersive experience of video, which is already used by candidates and hiring managers alike.
Similar to the video interview tools out on the market, a virtual reality system would be integrated within the applicant tracking system or core HRIS with minimal burden on IT teams.

Pre-employment experiences:

the benefits: By providing candidates with pre-hire tours, companies will ensure they are portraying the right brand message.

To attract the best people to join their teams, companies must differentiate themselves. Virtual reality provides an opportunity to prove that they are modern and high tech.
Virtual reality is the closest thing you can get to experiencing something without actually doing it. A virtual tour of the office could give the employee a better sense of the workplace culture and what it will actually be like working for a particular firm. In addition to info sessions and career fairs, virtual office tours could be used as a method of familiarizing candidates with the company culture. Some companies already offer online 3-D tours of office amenities and training areas. The job market is hot – the unemployment rate is at an all-time low of 3.6% as of May 2019 – so the best applicants are selective and often receive multiple job offers. Aside from the common offerings (competitive pay, benefits, development, flexibility, etc.), employers may find it valuable to provide a virtual experience of the actual day-to-day job. Rather than just hearing about the job opportunity, candidates could actually feel themselves doing the job with VR.

Employee Onboarding:

the benefits: speed up employee time to productivity.

How long does it take your new hires to reach proficiency in their jobs? How much productivity is lost between the time they accept the offer letter and the time they are actually doing productive work for your business? How long is your onboarding process? These are just a few questions HR teams ask themselves, especially when companies hire large cohorts of people. Optimizing post-hire processes can help ensure quick time to productivity. Your CFO will be glad to hear that your employees can get started on their actual jobs more quickly, ultimately contributing to your company’s profitability and bottom line. Developing a type of self-paced e-learning with VR could simply supplement what companies are currently doing with their learning management system.
This includes the employee training process and could be linked within your Learning Management System.

With self-paced, module-based training courses, VR in addition to videos may help employees learn more quickly – which brings us to our next area of VR in HR, learning and development. More on that below.

Learning and Development:

the benefits: continuous, on-demand employee training.

Flight simulators already exist that offer pilots in training the opportunity to perfect their skills before facing the dangers of a live environment. Similarly, other types of employees, such as bridge construction workers who perform tasks on underwater bridge supports while scuba diving, face potentially dangerous work situations. To ensure employees are practiced, ready, and know what to expect, simulations of work environments could be developed so that they can better prepare themselves for the job. In this scenario, incorporating a mixed reality experience may even make sense.

Workplace Gamification:

the benefits: by leveraging AR-type tools to make work engaging and motivating, companies may increase employee productivity.

Here’s where we can use our imagination even more, and where understanding what video game systems have already done helps us see what’s actually possible. In a physical-labor-intensive job in high school, I had the task of loading equipment and supplies from a warehouse into the back of large semi-truck trailers. The hours were long, the pay was good, and it was quite a workout to say the least. Imagine if warehouse workers like me, tasked with loading trailer after trailer, could do so in a virtualized gaming world. Workers could wear a headset and become immersed in a Tetris-like environment where they are no longer simply loading boxes and supplies into a truck, but are playing a game in real time, with graphics, information, and instructions displayed to them throughout their workday. Incentives could be pushed in the form of in-game rewards or something similar to help them stay motivated. Beyond the work and the game itself, the game could serve as a platform fostering greater employee collaboration, feedback, and performance.

Health & Wellness:

the benefits: keep employees happy and productive by providing employee escapism & mini-vacations.

The “R” in VR stands for reality, and we shouldn’t take that word lightly. The immersiveness of VR often makes the user feel that their experience is truly real. As the saying goes, “perception is your own reality”. If an experience feels real to someone – setting aside the logical knowledge that they are wearing a VR headset – is its impact not close to the impact the real-life experience would have?
All humans encounter and experience stress. It’s a fact of life, and some stress is good. But having time to relax is also important. By providing a meditation-like experience through virtual reality to break up a stressful work environment, companies could foster employee wellness. By giving employees a relaxing mini-vacation at their desks, how much might employee productivity increase?

Meetings:

the benefits: make remote work more personal.

Similar to the case with interviews, VR tools could allow teams located in different areas to communicate and collaborate as if they were in the same room. The goal is to replicate the natural feeling of presence you have during a standard in-person meeting.

We’ve covered the potential ways that virtual reality could change and improve the employee experience.

Business leaders must consider how they will establish an advantage over the competition by staying ahead. Adopting virtual reality technology to supplement their HRIS might just be one way to do so.

As we’ve shown, employees tend to embrace and consume the latest and greatest technology in the workplace. Improvements like better monitors, ergonomic mice, and automated standing desks simply make the employee’s day more comfortable.

Your Free MONTHLY newsletter,
the Emerging Tech Top 3.


9 Productive Organizations Using Virtual Reality in Human Resources

In this post, we cover 9 different companies that are using virtual reality for human resources. We’ll discuss the benefits these companies have realized by implementing VR in HR and other parts of the enterprise, which include:

– reduce travel expenses
– reduce costs
– save time
– enable employee flexibility
– enhance productivity
– increase efficiency, and more.

We hope this post helps you decide if VR is right for your business.

Examples of Enterprise VR for HR:

This post will cover a few companies that have implemented virtual reality into their current processes and systems.

1. Deutsche Bahn

The German railway company Deutsche Bahn had a large population of employees set to retire within a few years. The company was hiring aggressively and needed a way to train all these new employees. As a railway company, its equipment and machinery are heavy and expensive – there isn’t an easy way to bring them into the classroom. The company built 360-degree VR experiences with the help of a few third-party startups. “This lets technicians practice how to assemble switch locks and troubleshoot problems with switches”, states the Deutsche Bahn website. The company also mentions that an augmented-reality system can be used to guide even more experienced employees through complex repair processes to speed up procedures. In addition to training, the company sets up VR headsets at job fairs, trips, and interviews. This enables recruiters to present immersive experiences to attract prospective candidates.

2. Walmart

Walmart, the nation’s largest private employer, has installed Oculus Go headsets in its 4,600 U.S. stores. VR helps managers understand how an employee accomplishes a task in a virtual setting. Managers are able to gain insight into employee skills and understand how employees handle everyday scenarios, such as managing sections of the store or preparing for the busy season. This helps determine who gets raises and promotions to management roles. Walmart’s goal is to reduce turnover as well as limit decision bias in hiring to increase diversity. STRIVR, a company based in Silicon Valley, builds the VR simulations.

3. The British Army

VR helps soldiers of the British Army familiarize themselves with aspects of combat before going through actual training. VR is used for vehicle, flight, and battlefield simulation, medic training, and even a virtual boot camp. Leveraging VR allows the British Army to save money. VR is a cheaper way to train soldiers on certain processes before doing them in real life. Visualizing and going through proper procedures and techniques in a virtual environment minimizes the use of costly resources such as fuel and supplies. Avatars within the simulation are designed to display true facial features such that soldiers can recognize each other, allowing soldiers to function as a team. Data capture and analysis allow soldiers to review and improve their performance. The virtual training is developed by Bohemia Interactive Simulations (BiSim).

4. Kentucky Fried Chicken

KFC is using the technology to teach employees how to cook fried chicken. The company did so by partnering with Oculus, to build an escape-room themed game where employees learn how to cook chicken the hard way. Although the game has received mixed reviews, the description states that “this was the clearest way to communicate exactly what is expected when it comes to making his fried chicken.” Say what you will about the overlap of VR technology and cooking chicken – one thing is for sure – there are few better ways to satisfy your hunger than stopping by KFC.

5. Lowe’s Home Improvement

Lowe’s developed a VR application at its Innovation Labs to make home improvement projects simple and seamless for the customer. The virtual experience allowed customers to walk through and learn how to accomplish DIY projects, such as tiling a shower. According to Lowe’s, going through such a project in virtual reality helped people reach memory performance levels comparable to someone with more experience. It can be hypothesized that VR has a measurable impact on humans’ ability to learn. By giving inexperienced customers the confidence to take on a DIY project, Lowe’s stands to sell more products and increase revenue.

6. Hilton Worldwide

Hilton is using VR for virtual employee training. Hilton’s goal was to improve communication between customers and staff. To do so, they developed a VR system that replicates real-life customer reactions to different scenarios. This helped employees develop the interpersonal skills needed to foster positive customer interactions. Beyond customer interaction, Hilton also piloted a conflict resolution program to help employees become more skilled at service recovery. One of the challenges of rolling out the service, it seems, is localization: since Hilton has properties across the world, the VR content must be adapted to many different languages.

7. Samsung

Samsung needed to make training at its production facilities more efficient and less costly. The employee experience during training includes a headset as well as a handheld controller that mimics a tool, allowing the employee to work through mock manufacturing processes. For Samsung, the bonus is that they actually make the hardware – the phones used with their VR headsets.

8. Volkswagen

VW uses the HTC Vive virtual reality system to help train 10,000 employees on the production and logistics teams, in order to increase productivity and efficiency. With VR training, employees can learn at their own pace, and the company avoids costly travel expenses. VR is also scalable. So far, the company’s VR lessons include vehicle assembly, new team member training, and customer service.
In addition to employee training, VW uses VR technology for prototyping. The product team is able to construct virtual car parts that can be built in real life after being perfected. Building virtual prototypes has a few advantages over physical ones – often, virtual is faster, cheaper, and easier to tweak. It also allows designers to communicate and share ideas with engineering and others on the development team.

9. NBC

NBC broadcasted a number of hours of the Rio Summer Olympics as well as the PyeongChang Winter Olympics in VR. The events included opening and closing ceremonies, men’s basketball, gymnastics, track and field, beach volleyball, diving, boxing, alpine skiing, curling, snowboarding, skeleton, figure skating, short track, ski jumping, ice hockey, big air, and fencing. The NBC team has partnered with Intel and Samsung to broadcast these past events.

We hope you now have a general idea of the ways that a number of large, recognizable companies are investing in virtual reality technology specifically for human resources, and the positive impact they are realizing by doing so. Is your team looking to implement VR or other emerging systems in your organization? The team at AbstractRealization.com would love to hear about it – let us know here.