
The Fourth Industrial Revolution – The Rise of the Autonomous Economy


To understand the present, one has to research the past. To see the future, one has to feel the momentum building in the present.

When examining the past, it becomes clear that advancements in technology have been the leading driver in the progression of human civilization. Just as the wheel and compass revolutionized previous generations, the development of the smartphone and the Internet has completely changed society today, making it hard to even imagine a world without them. While it's easy to look back in history and identify the key breakthroughs, most people are unable to foresee the technological innovations of the future before they become fully embedded in everyday life. In fact, most new technologies are ridiculed in their beginning stages, with "experts" claiming they're unachievable and unnecessary.

(Courtesy of The Nomads)

However, despite the doubt that stubbornly clouds the present, many believe that current technological trends are on the precipice of igniting a fourth industrial revolution, this time sparked by the rise of mass automation. While economies directed by humans are unlikely to ever disappear, a parallel economy run entirely by machines is beginning to form. Similar to industrial revolutions of the past, the current one is coalescing around certain technological breakthroughs, specifically the Internet of Things (IoT), Artificial Intelligence (AI), and Distributed Ledger Technology (DLT).

(Nobel Prize winning economist Paul Krugman was clearly wrong about the impact the Internet would have on society; source)

While the average person has little to no awareness of what's coming, the trajectory of modern technology isn't going unnoticed by everyone. Brian Arthur, an economist famous for developing the modern approach to increasing returns, has proposed a thesis to describe the phenomenon, coining it "the autonomy economy." Klaus Schwab, founder and executive chairman of the World Economic Forum, has echoed comparable sentiments and even wrote a book about it called "The Fourth Industrial Revolution."

Before taking a closer look at current technological trends, it’s beneficial to study the impacts that the first three industrial revolutions had on society. Possessing historical knowledge can go a long way in helping one envision how the Fourth Industrial Revolution will impact the future.

The Industrial Revolutions of Past

The previous three industrial revolutions were all driven by a series of separate but interconnected technological innovations that vastly increased humanity's ability to produce output while greatly reducing the input needed to obtain it, whether through reductions in labor, time, or materials. These advancements not only remade society in an economic sense, but also reshaped how humans perceived their day-to-day lives.

The First Industrial Revolution:

From roughly 1750–1850, the First Industrial Revolution took place, predominantly the result of humanity's ability to harness two key energy sources: steam and coal. Its main driver was a succession of engineering breakthroughs in the steam engine, along with the exploitation of a cheaper, more abundant mineral, coal. The combination eventually gave rise to coal-powered external combustion steam engines, capable of producing far more energy at a cheaper price than ever before. This new input led to major transformations in manufacturing and fueled radical changes in several industries, such as textiles, metalworking (especially iron), and transportation.

(Some of the major inventions of the First Industrial Revolution, made possible by the innovations of the steam engine; source)

Some of history’s most famous inventions were developed during this time period, like the cotton gin, a machine used to separate cotton fibers from their seeds, and the power loom, a machine used to weave cloths and tapestries. Other notable breakthroughs include the development of machine tools, the rediscovery of cement, the introduction of sheet glass, and the burning of coal to produce gaslight.

Prior to the First Industrial Revolution, most goods were made locally and were the work of individual craftsmen, but after the commercialization of coal-powered steam engines, large industries formed, able to produce products for a much wider consumer base. A foundational shift occurred as society moved from a rural agrarian culture toward industrial cities centered around large manufacturing factories. The workforce was no longer dominated by individual laborers, but was instead slowly absorbed into industries run by capitalists who employed the working class. Cities started to become the economic powerhouses of whole nations. The trend wouldn't slow down either, as it wouldn't be long before a second industrial revolution took place, potentially even more impactful than the first.

The Second Industrial Revolution:

Also known as the Technological Revolution, the Second Industrial Revolution lasted from about 1870 to 1914 (the start of WWI) and is best described as a mastering of the technology introduced in the First Industrial Revolution, mixed with a major breakthrough of its own: the harnessing of two new energy sources, electricity and petroleum.

Thanks to more advanced developments in iron and steel production, machine parts began to be produced in bulk and standardized across industries, such as standard sizes for screws and metal bars. Intricate railway networks opened across several advanced countries, and the steam turbine engine revolutionized naval vessels. Essentially, society developed far superior transport routes for all the factory products being mass-produced. Markets really began to open up during this period due to the increased speed of transportation and the decreased price of machine-driven production.

(Railroad infrastructure in 1860 was far more advanced than just 30 years prior, when there were almost no railroads in the U.S.; source)

The culminating outgrowths toward the end of the Second Industrial Revolution were electricity and petroleum; even the modern world of today is completely dependent on them. Electrification is often seen as the biggest advancement of the 20th century because it gave society a cheap, abundant source of energy that would not only power factories and homes at any time of day, but would lay the foundation for all the devices to come later. While electricity was vital, oil has been the most sought-after commodity of the last century. It's been the dominant fuel source for powering most transportation vehicles, whether cars, airplanes, or farming equipment. It's also given rise to a vast array of consumer products (plastics), fertilizers, chemicals, and medicines.

There were other major advancements during this time as well, such as in communication with the inventions of the telegraph, telephone, and radio. Papermaking machines also started to gain traction at the beginning of the 20th century, resulting in new abilities to spread knowledge, news, and literature across continents. Finally, developments in rubber production led to the mass production of tires that aided the inventions of bicycles, cars, and airplanes.

(Breaking down some of the key differences between the First and Second Industrial Revolutions)

It's important to grasp how the First Industrial Revolution was the technological bang that started the concept of modern industrial economies, while the Second Industrial Revolution was the mastering of that technology, giving rise to modern cities filled with the first skyscrapers. With countries able to trade and communicate like never before, the world was entering the beginning stages of its move toward globalization. The trend would only continue, eventually reaching unprecedented levels in the last half of the 20th century, when society would experience a radical new technological bang: the Digital Revolution.

The Third Industrial Revolution:

Starting around the late 1950s up until the present day, the Third Industrial Revolution, also known as the Digital Revolution, has taken root in society and is mainly the culmination of a shift from mechanical and analogue electronic technology to digital electronics. The two major outgrowths have been digital computing and communication technology. The fast computation of computers, mixed with the interconnection of the Internet and satellite broadcasting, has created a digital architecture where information can be instantly shared all around the world by devices with far faster processing speeds than humans. It’s no wonder people refer to this time period as the Age of Information.

(The switch from analog to digital has been rather quick since the year 2000)

The abundance of digital information is the result of a mastery of electricity and precision craftsmanship, which combine to produce ever-improving microprocessors, aka computer chips. From smartphones and HD television screens to high-end photography equipment and drones, computer chips are the backbone of all advanced electronics. Interestingly, all these technologies have consistently been replaced with better versions within a small period of time. The phone is a good example, going from the payphone, to the landline, to the cell phone, to the smartphone, and potentially to a biotechnology next.

Just like the manufacturing innovations of the 1st and 2nd industrial revolutions led to the construction of industrial cities using all the materials being produced, the electronic innovations of the 3rd and 4th industrial revolutions are leading to the construction of intelligent applications using all the data being produced.

The Fourth Industrial Revolution

To wrap one's mind around the Fourth Industrial Revolution, it's important to understand the concept of intelligence. The best way to grasp intelligence is to think about how it is obtained, which is usually a four-step process:

1) Gather data

2) Process the data using previous data as reference

3) Take action based on the refined data

4) Receive feedback data, learn from the result, and store it all in memory.

(A simple loop of intelligence; source)

The process is a cyclical loop of continually gathering data, processing it, taking action, and receiving feedback. The more times someone goes through the process, the more intelligent they become, assuming they’re able to learn from their actions. Two key factors underpinning it all are exposure to as much data as possible and developing impeccable pattern recognition skills.
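The cyclical loop above can be sketched in code. This is a minimal illustration with made-up names and values (a temperature sample standing in for arbitrary data), not a definitive model:

```python
# A minimal sketch of the four-step intelligence loop:
# gather -> process -> act -> learn from feedback.
memory = []

def gather_data():
    # Stand-in for a sensor reading or user input.
    return 21.5  # e.g., a temperature sample

def process(sample, history):
    # Compare the new sample against the running average of past data.
    baseline = sum(history) / len(history) if history else sample
    return sample - baseline  # deviation from what has been seen before

def act(deviation):
    # Take action based on the refined data.
    return "cool" if deviation > 0 else "idle"

def learn(sample, action):
    # Store the outcome so the next cycle has more reference data.
    memory.append(sample)
    return action

for _ in range(3):  # each pass through the loop refines the baseline
    sample = gather_data()
    deviation = process(sample, memory)
    learn(sample, act(deviation))
```

Each pass through the loop enlarges `memory`, which is exactly the point: the more cycles completed, the more reference data the next judgment has.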

Patterns not only point out what works vs. what doesn't, strengths vs. weaknesses, and trends vs. anomalies, but they also help people categorize information so it's easy to remember for future use. Superior pattern recognition that leads to improved mental and physical capabilities is the backbone of harnessing intelligence. As Albert Einstein once said, "The measure of intelligence is the ability to change." The only way someone is going to change is by being exposed to a negative pattern holding them back or seeing a better pattern to get ahead. The last step is implementation through willpower and action.

If technology is to replicate intelligence and develop it into a digital commodity sold on the open market, then it must be harnessed using the same model. While most are unaware of recent developments, current technology is opening up new possibilities on this front, specifically due to advancements in the IoT industry, AI, DLT, and a few other macro trends. Utilizing advancements in hardware, software, and data, technology is on the verge of manufacturing intelligence. The autonomous economy is closer than most think.

The Internet of Things (IoT):

A major outgrowth of the Digital Age has been the mass production of data. It's become such a recognized phenomenon that people have started saying "data is the new oil." There are really two categories of data: public data and private data. The Internet is the largest oil well of public data and is unique because it's an ever-increasing resource. Private data is mostly concentrated on private servers, especially in the cloud, and contains sensitive information that people either don't want to freely share or don't want seen. It shouldn't be surprising anymore that many of the largest companies in the world own the most data, like Google, Facebook, Amazon, and Baidu.

(It's interesting to note how most of the biggest companies in the world now revolve around tech and data, as opposed to resources just 10 years ago; source)

Most of the data gathered today is collected through applications, such as Google gathering data based on search results, Facebook gathering data based on your social profile, or Amazon gathering data based on people's spending habits. Essentially, companies host applications that consumers willingly use and then collect data metrics based on their activity. There are also open-source applications that anyone can derive metrics from, like markets, sports, or open case records.

However, to harness intelligence capable of making quick judgments like humans, there must be access to real-time data. Until recently, real-time data has been hard to come by, but now, thanks to some major innovations in sensor and actuator technology, it's become a reality. All types of sensor activity are possible, such as sensors that measure temperature, location, speed, acceleration, depth, pressure, blood chemistry, air quality, color, photo scanning, voice scanning, biometrics, and electric and magnetic force. Normally, humans are required to take such measurements, but that is quickly changing due to the mass production of cheap yet accurate sensors and actuators. They're placed not only in the environment, but within machines, like industrial machinery and robotics, and within or on humans, like a Fitbit or high-tech pacemakers.

(The various types of sensors and actuators that exist; source)

If there is going to be an autonomous economy, there needs to be a river of real-time information constantly flowing. The only way autonomous action is effective is if it can respond quickly with confident judgments. Having the ability to monitor intricate details in real time about a facility, its equipment, the environment it operates in, and even its workers (human or robot) is transformational on many levels and has yet to be seen at scale. Essentially, everything, both physical and non-physical, is being brought online as data into an interconnected web, hence the name: the Internet of Things. It's the human senses in digital form.
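As a rough sketch, such a real-time feed can be modeled as a stream of timestamped readings that downstream software consumes. Every name here (`Reading`, `poll_sensor`, the sensor IDs) is an illustrative assumption, not a real IoT API:

```python
import time
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g., "temperature", "pressure"
    value: float
    timestamp: float

def poll_sensor(sensor_id, kind, value):
    # In a real deployment this would query hardware; here we stub the value.
    return Reading(sensor_id, kind, value, time.time())

readings = [
    poll_sensor("t-01", "temperature", 22.4),
    poll_sensor("p-07", "pressure", 101.3),
]
# Downstream logic (AI models, smart contracts) consumes the stream in real
# time, e.g. flagging readings that cross a threshold.
alerts = [r for r in readings if r.kind == "temperature" and r.value > 30]
```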

However, raw data is only as good as the filtering mechanism that analyzes it. Without proper analysis, applications would be like animals acting off instinct, which is why artificial intelligence is an important component of automation.

Artificial Intelligence (AI):

Whereas data is the fuel for intelligence, the brain is the engine that takes in data, cross-references it with previous data, sorts it into categories, makes judgments, triggers actions in the real world, and puts it into storage. The human brain is incredibly powerful and still remains a mystery to scientists. It's the organ that truly separates humans from any other species on the planet, due to its cognitive abilities. As a result, replicating the human brain as a technology is going to be very complex and take a significant amount of time to master. However, breakthroughs are beginning to take place in the field of artificial intelligence, giving companies the ability to run software that mimics human intelligence in some form.

According to Adelyn Zhou, a leading voice in AI and the Marketing Director for Chainlink, there are seven types of artificial intelligence:

1) Act– systems that act based on rules, like a smoke detector or cruise control.

2) Predict– systems that are capable of analyzing data and producing probabilistic predictions based on the data, like targeted ads or suggested content.

3) Learn– systems that make judgments based on predictions, such as self-driving cars that act based on incoming sensor data.

4) Create– systems that create based on data, such as designing an art piece, architecting buildings, or composing music.

5) Relate– systems that pick up emotions based on facial, text, voice, and body language analysis, such as voice-to-text applications and facial scan technology.

6) Master– systems that transfer intelligence across domains, such as recognizing that four different pictures all represent the same idea/word.

(While it’s easy for humans to recognize all these pictures represent a tiger, machines using AI software have a harder time doing so. It requires exposure to a lot of data to master; source)

7) Evolve– systems that can upgrade themselves at the software or hardware level, such as humans in the future having the ability to download intelligence into their brain like it’s software.

The basic idea is that new software is able to take in new data, process it against huge databases of stored information, make judgments that lead to real-world actions, and receive feedback that can be used to learn from. The whole process is nothing more than a software algorithm that evolves the more it interacts with data. It's no wonder AI is becoming the main focus of Google, considering it has the most data on Earth.

While most people might not think of streaming songs from Pandora or suggested videos from YouTube as artificial intelligence, that's exactly what it is. YouTube's servers offer a wide variety of videos on the platform; users click on videos they want to watch and give feedback on them, such as a thumbs up/down or metadata in the form of how long they watched, and that feedback is then used to update the software algorithm. The AI software can also take someone's activity and cross-reference it with the data of other users who like similar videos to suggest better selections. Effectively, it's a self-evolving algorithm changing based on input data. This type of AI is referred to as machine learning.
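The feedback loop described above can be reduced to a toy sketch: each thumbs up/down nudges a per-topic score, and suggestions follow the scores. The topics and the learning rate are invented for illustration:

```python
# Minimal sketch of feedback-driven recommendation, machine learning in
# spirit: feedback updates the model, the model updates the suggestions.
scores = {"music": 0.0, "sports": 0.0, "news": 0.0}

def record_feedback(topic, liked, rate=0.5):
    # Positive feedback raises the topic's score; negative feedback lowers it.
    scores[topic] += rate if liked else -rate

def suggest():
    # Recommend the topic with the highest learned score.
    return max(scores, key=scores.get)

record_feedback("sports", liked=True)
record_feedback("news", liked=False)
record_feedback("sports", liked=True)
```

After these three pieces of feedback, `suggest()` favors "sports"; real recommenders also cross-reference other users' histories, but the update-then-suggest cycle is the same.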

Some of the more recent advancements, however, have come through the development of neural networks used for deep learning. Neural networks are a subset of machine learning centered around algorithms modeled after the human brain, specifically recognizing patterns and categorizing/classifying information by comparing it to known information. Deep learning is a type of neural network with layers based on related concepts or decision trees, where the answer to one question leads to a deeper related question until the data is properly identified.
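A toy forward pass shows the layered idea: each layer computes weighted sums of its inputs and applies a nonlinearity, so raw features are transformed step by step into a score. The weights below are arbitrary illustrative values, not a trained model:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    # Each unit: weighted sum of all inputs, then the sigmoid activation.
    return [sigmoid(sum(i * w for i, w in zip(inputs, row))) for row in weights]

x = [0.5, -1.0]                                # input features
hidden = layer(x, [[1.0, 0.5], [-0.5, 1.0]])   # hidden layer (2 units)
output = layer(hidden, [[1.0, -1.0]])          # output layer (1 unit)
```

Training (which this sketch omits) consists of adjusting those weight matrices from feedback until the output matches known examples.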

The main idea is to design software that can make decisions based on data instead of human intervention. Today's software performs simple functions based on inputs, but AI software takes actions across industries and evolves in the actions it takes thanks to its ability to take in a much larger set of inputs. AI software is intelligence in digital form, offered to the wider public as a technology. Most people only think of robots as AI, and while there are certainly intriguing breakthroughs in that field, the software is the key to it all, because what's a body without a brain?

(Companies are increasingly realizing the importance of adopting AI technology; source)

There are already many industries using AI software to increase their bottom line. One example is SAP HANA, an intelligent database that's able to take in all types of information from a company, process it, and spot anomalies. Companies like Walmart use SAP HANA because it can process their high-volume transaction records within seconds, all in one spot. It not only saves money due to a major reduction in the labor needed to reconcile accounts across different systems, but it spots errors before they happen and suggests leads for the company to pursue. It also aids in budget forecasting due to its ability to cross-reference real-time data with large silos of existing data. Companies are slowly beginning to run themselves, minus some managerial oversight.

Governments are also leveraging AI technology to improve cities. One example is the transportation system in Pittsburgh, where instead of relying on pre-programmed cycles, lights have been equipped with sensors that monitor traffic movements and respond in real time to maximize flow. It also happens to be the city where many automated cars are being tested, which use embedded sensors to monitor the environment, as well as data feeds from traffic sensors, to operate autonomously.

With commoditized intelligence now being made possible thanks to copious amounts of data and intelligent algorithms, the final step is to erect infrastructure for it all to communicate on in real-time with little to no friction. That new infrastructure appears to be distributed ledger technology.

Distributed Ledger Technology (DLT):

Human intelligence is so remarkable because it’s collaborative, meaning the social reservoir of knowledge is a result of intelligence interacting with other intelligence. Having barriers between two intelligent systems slows down growth because it inhibits connections from taking place. The more connections that happen, the more intelligent something can become. In order to maximize connection in society, all systems need to be able to easily interact with one another so data and value can move freely within society.

The ideal infrastructure for an autonomous economy requires a database, a processing layer, a transactional layer, and a connectivity layer, which allows any system to receive inputs and send outputs to any other system. The network must be secure, operate in real time, and provide confidentiality options when needed. It also must provide receipts for all parties involved, be cooperative with the law, and properly monetize the value on it. Finally, it must be permissionless and public to facilitate the network effects needed for maximum connection.

First, it’s important to understand the term distributed ledger technology, which is just an all-encompassing term for a family of technologies centered around shared distributed ledgers and decentralized databases.

Blockchain & Other Shared Ledger Technology

Blockchain, the best-known DLT, is a shared storage layer able to process its own transactions and store the results in a common ledger. It's powered by a distributed network of computers all running the same open-source software. Besides initial setup and periodic maintenance performed by each individual running a client application, a blockchain is a completely automated, self-run network, able to reach consensus while leaving no central point of attack for malicious actors. In fact, it can be argued that blockchain is the most secure database architecture in the world. No central authority is needed for a public blockchain, anyone can use the network and build applications on top of it, and transactions are peer-to-peer (P2P) instead of having intermediaries between parties. Similar to how the Internet blew up for data transfer due to its permissionless nature, public blockchains could see a network-effect explosion as the dominant databases and mediums of exchange for both the human and machine economies.

(Network effects are possibly the biggest reason public blockchains will see mass adoption at some point in the future; source)

Blockchains are often differentiated by the way the network reaches consensus and who's rewarded for helping achieve it. There are a variety of consensus mechanisms, such as Proof-of-Work (PoW) in Bitcoin, Delegated Proof-of-Stake (DPoS) in EOS, Delegated Byzantine Fault Tolerance (dBFT) in NEO, Federated Byzantine Agreement (FBA) in Stellar, and Proof-of-Stake (PoS), which has yet to be fully achieved, though Ethereum is pushing to be the first. There are also permissioned blockchains, such as Hyperledger Fabric (backed by IBM), that only allow certain parties to use the network, akin to a private consortium. There is a lot of doubt, though, about permissioned blockchains actually being beneficial once public blockchains become scalable and allow privacy. Similar to the intranet vs. Internet debate, what's likely to occur is that permissioned chains find their niche use cases, but ultimately public blockchains become the main highway of interconnection for value transfer around the world.

There are other forms of DLT too that offer similar propositions to blockchain. These include Directed Acyclic Graphs (DAGs) like IOTA and Nano, or technologies like Hashgraph and Holochain that use gossip protocols instead of full-network consensus. The overarching theme, though, is that all these databases store and process data on a common distributed network. As Blythe Masters of Digital Asset puts it, this provides a "golden source of truth."
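The tamper-evidence of a shared ledger comes from hash-linking: each block commits to the hash of the block before it, so altering any earlier record invalidates every hash after it. A minimal sketch (omitting consensus, signatures, and networking entirely):

```python
import hashlib
import json

def make_block(data, prev_hash):
    # Each block stores its data plus the hash of the previous block.
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("genesis", "0" * 64)
block1 = make_block("alice pays bob 5", genesis["hash"])
block2 = make_block("bob pays carol 2", block1["hash"])

def verify(chain):
    # Recompute each block's hash and check every link back to its parent.
    for prev, curr in zip(chain, chain[1:]):
        body = {"data": curr["data"], "prev": curr["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev"] != prev["hash"] or curr["hash"] != digest:
            return False
    return True
```

Rewriting `block1`'s data without redoing its hash (and every hash after it) makes `verify` fail, which is the property the distributed network relies on.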

Smart Contracts

The second-best-known DLT is the smart contract, a protocol within the blockchain that mimics legal agreements and courtroom judges. Economies require all types of agreements and arbitration of those agreements based on real-world outcomes. Smart contracts are able to recreate this in the digital world by using if/then statements to trigger transactions based on the contract's state. The basic premise is that a contract is coded just like it would be written, using if/then parameters. An example would be a derivatives contract where, if the product hits a certain price, the customer gets paid out, but if not, the customer pays the other party.

(An example of how smart contracts trigger automated actions within an economy; source)

While IoT gathers data and AI processes it, smart contracts are the software infrastructure that uses data to trigger actual actions, such as payments, transfers of data, or storage of an outcome. It's comparable to the human handshake in a business deal, or a human pressing the SEND button to trigger an action. Since smart contracts reside within blockchains, they gain all the security advantages that come along with them. Smart contracts are really a functional transaction layer that triggers autonomous actions using data, creating what can only be described as a self-run economy with automated movement of value. Smart contracts represent real-world action and trade.
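The derivatives example above is just if/then logic over a price feed. Here it is as plain Python rather than on-chain code; the function name, parties, and parameters are hypothetical:

```python
# Hypothetical sketch of a derivatives contract as if/then settlement logic.
# A real smart contract would run on-chain with the price fed in by an oracle.
def settle_derivative(market_price, strike_price, notional):
    # If the product hits the agreed price, the customer is paid out;
    # otherwise the customer pays the counterparty.
    if market_price >= strike_price:
        return {"pay_to": "customer", "amount": notional}
    return {"pay_to": "counterparty", "amount": notional}
```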

Oracles

Finally, the more hidden but extremely important piece of DLT is the oracle. Oracles are the bridges that connect all types of systems together. They are the Internet of the blockchain ecosystem, bridging the gap between the legacy world (off-chain) and the DLT world (on-chain). Oracles allow off-chain data, such as that sitting on the servers of big data providers, to be fed into smart contracts to interact with the contract logic. Oracles also allow smart contracts to push data onto other systems once the contract logic has executed, such as triggering a payment on an external system like SWIFT or PayPal, or sending files to another blockchain. Simply put, oracles are the connectivity layer that glues everything together so all systems can communicate.

To achieve connectivity, oracles leverage Application Programming Interfaces (APIs) as the endpoints of connection. This makes sense, too, because APIs have exploded in use over the last decade. A simple way to think of APIs is as microservices that others can leverage instead of spending the time and money to code up their own. For example, to build Uber, someone could simply use a GPS API, a payments API like Stripe, and an SMS API for messaging. This lets developers focus on the core code and functionality of their app and use add-on services through APIs for the rest.
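The Uber example above can be sketched as composition: the app's core logic only orchestrates, while each capability is an external service call. The client classes below are stand-ins for third-party SDKs (GPS, payments, SMS), not real libraries:

```python
# Hypothetical API "microservices" an app composes instead of building itself.
class GPSClient:
    def locate(self, user):
        return (40.7128, -74.0060)  # stubbed coordinates (New York)

class PaymentsClient:
    def charge(self, user, amount):
        return {"user": user, "amount": amount, "status": "ok"}

class SMSClient:
    def send(self, user, message):
        return f"to {user}: {message}"

def book_ride(user, fare):
    # Core app logic: orchestrate the three services and return the result.
    gps, pay, sms = GPSClient(), PaymentsClient(), SMSClient()
    pickup = gps.locate(user)
    receipt = pay.charge(user, fare)
    notice = sms.send(user, f"driver en route to {pickup}")
    return receipt["status"], notice
```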

(The API economy is vital to future technology)

Finally, oracles need secure and reliable lines of communication between systems for automation to truly be trusted. The problem with most oracles today is that they are centralized, either through a central oracle service provider or through one of the parties in the transaction building its own. Centralization creates easy attack vectors for malicious actors and opens up the possibility of a service provider tampering with the data feed. A scary situation is one where a high-value smart contract malfunctions because the centralized API service provider was hacked, had an employee tamper with the data feed, or went offline for maintenance. Efficient automation is about limiting vulnerabilities, because vulnerabilities require human intervention to clean up. It could get extremely messy too, leading to a chain reaction of failures all based on the same root cause.

(The functional connectivity offered by Chainlink)

To truly solve the oracle problem, decentralization is key. Chainlink (LINK), a DLT startup, is currently the industry leader in aiming to provide a decentralized oracle network for trustless connectivity between systems using API middleware. It does so by decentralizing the lines of communication between smart contracts, limiting any central attack point, and by providing secure off-chain computation where oracles can't see the details of the transactions they're servicing. Off-chain computation has the added bonus of providing fast, cheap processing for smart contracts. Secure, confidential, and scalable transfer of data could be the last step in leveraging smart contracts to make real economic decisions in automated and intelligent ways. (Read more about Chainlink here)
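One common way a decentralized oracle design limits a single bad data source, sketched here as a general idea rather than Chainlink's exact protocol, is to query several independent nodes and aggregate their answers, for example by taking the median:

```python
import statistics

def aggregate_oracle_responses(responses, min_responses=3):
    # Require a quorum of independent nodes, then take the median so a single
    # tampered, faulty, or offline feed cannot skew the final answer.
    if len(responses) < min_responses:
        raise ValueError("not enough oracle responses for a trusted answer")
    return statistics.median(responses)

# Five nodes report the same asset price; one is faulty or malicious.
price = aggregate_oracle_responses([100.1, 99.9, 100.0, 100.2, 9999.0])
```

The outlier of 9999.0 has no effect on the median, whereas a single centralized feed reporting that value would have fed it straight into the contract.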

Putting it all together reveals a system with blockchain as a database and processing layer, smart contracts as a transactional layer with functionality, and oracles as a connectivity layer with added functionality. Theoretically, it can all be decentralized into an autonomous network with few security flaws. The last step is bringing it all closer to real time, which hasn't been achieved yet, but off-chain computation, sidechains, and second-layer solutions like the Lightning Network are bringing it closer to reality.

Before painting the full picture of how IoT, AI, and DLT synergistically come together, it's important to recognize some of the other major macro trends happening in both technology and society that are further driving the world toward a fourth industrial revolution.

Macro Trends:

Most of the macro trends are fusing around the same thread of an interconnected global economy that's increasingly open, moving toward real time, and being run by automation. The most obvious macro trend is globalization, as developments that grew out of the Internet have brought communication to real time across borders and made travel affordable to the average consumer. In fact, calling people across the world is basically free now thanks to apps like Skype and WeChat, and almost all countries are open to travel. Cultural and technological barriers are quickly disappearing, especially as voice-to-text applications become popular.

The second macro trend is Moore's Law: the observation that the number of transistors in a dense integrated circuit doubles roughly every two years, meaning the processing power of computers doubles every two years. This phenomenon could be a leading factor in the development of scalable DLT, and a leading driver in the development of intelligent robotics. Robotics itself is a growing industry that could displace most manual labor jobs, but combined with advancements in AI and fast computation, it could really open the market for affordable robot uses. Many forms of robotics are already in use, like household robots that sweep and mop the floor, or industrial robots like those at Amazon warehouses that move around and fill orders.
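The doubling described above compounds quickly, which a one-line projection makes concrete. The starting count of one billion transistors is an arbitrary illustrative figure:

```python
# Moore's Law as stated above: transistor counts double roughly every two years.
def projected_transistors(initial_count, years, doubling_period=2):
    # Each doubling period multiplies the count by 2.
    return initial_count * 2 ** (years / doubling_period)

# e.g., starting from 1 billion transistors, a decade of doubling
# yields five doublings, i.e. a 32x increase.
decade = projected_transistors(1e9, 10)
```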

(A graph showcasing technology growth in accordance with Moore’s Law; source)

Another major macro trend is the development of increasingly capable biotechnology. The basic idea here is a merger of humans and machines, achieved by putting machines inside or on the human body. There’s a great deal of overlap with IoT devices, which could lead to future applications that monitor different functions of the human body. Readings from these devices can help people maintain good health and could even be used to trigger smart contracts for health-insurance discounts. In the more distant future, biotechnology could lead to brain-computer interfaces, where the mind is able to download intelligence like software. Biotechnology is one of the largest trends heading into the future.
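The insurance-discount idea above can be sketched as a simple rule over wearable readings, the kind of check a smart contract might run. Everything here is hypothetical: the field names, the step goal, and the heart-rate threshold are invented for illustration, not taken from any real insurer or device.

```python
def qualifies_for_discount(readings, step_goal=8000, max_resting_hr=75):
    """Return True if average daily steps and resting heart rate meet targets.

    `readings` is a list of daily records like {"steps": ..., "resting_hr": ...}.
    Thresholds are illustrative placeholders.
    """
    avg_steps = sum(r["steps"] for r in readings) / len(readings)
    avg_hr = sum(r["resting_hr"] for r in readings) / len(readings)
    return avg_steps >= step_goal and avg_hr <= max_resting_hr

week = [{"steps": 9000, "resting_hr": 70},
        {"steps": 8500, "resting_hr": 72}]
print(qualifies_for_discount(week))  # True
```

On-chain, the readings would arrive through an oracle and the discount would be paid out automatically, but the decision logic is no more complicated than this.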

Finally, the last major trend is the open-source movement. Laws are being passed around the world mandating that companies and/or governments open up their APIs for outside entities to leverage, such as the PSD2 directive in Europe requiring banks to open their APIs to fintechs, and the 21st Century Cures Act signed by President Obama, which mandates that starting in 2018 electronic health records be made available through open APIs. There is also an open-source movement around software, with DLT being the premier example. Finally, knowledge and news are moving open-source through the Creative Commons license. The more open data becomes, the faster the hive mind grows, because openness creates more chances for connection.

Conclusion

Based on the information presented, a picture is starting to take shape of how the world might evolve over the next fifty years. The model that’s beginning to form is a world where data is a resource of ever-increasing supply thanks to large data providers, IoT devices, and the Internet. That data can be leveraged by AI algorithms that refine it and use it to take intelligent actions in the real world. Those actions are facilitated by DLT, which connects everything together, triggers the reconciliation of trades, and records it all in a shared ledger. Once such networks are put in place, they run themselves and can even grow smarter over time. This is the fourth industrial revolution.

The human economy is not going to disappear completely, but it’s safe to say that the autonomous economy is going to eat into it more each year. Robotics are going to displace manual labor, and data-driven, AI-enabled smart contracts are going to replace knowledge work such as lawyers, accountants, third-party intermediaries, data-entry positions, and insurance adjusters. This is just the beginning, too, as an immense number of developers are likely to start flocking toward AI algorithms and smart contracts. The more people get involved, the more velocity the movement picks up.

While the labor loss on the near horizon is a scary proposition, the technology could be liberating if implemented ethically. Humanity is going to be forced to seriously rethink its social and economic systems in a post-scarcity world. The problem will no longer revolve around production, because machines will produce enough for everyone. Instead, the problem will revolve around distribution, which could become highly political, but that is a necessary growing pain of a changing society.

It should be noted that the technology is still very much in its infancy and will require many developments before it sees wide-scale implementation. That shouldn’t be surprising, though, as every industrial revolution of the past has been a series of technological breakthroughs rather than a single epiphany. No one knows exactly what the future holds, but one thing is certain: automation isn’t going away anytime soon, so it’s better to accept it and work with it than to fight it in vain. Remember, intelligence is the recognition of patterns, so the only question that remains is: do you see the pattern, and if so, will you act on your understanding of it?

The past cannot be changed, but the future is yet to be conquered…

Follow me on Twitter: @Crypto___Oracle
Follow me on Steemit: @thecryptooracle

Check out my additional articles on my BlockDelta profile.
