This latest installment in my “Collective Intelligence” stream of consciousness is late. That tardiness, like most everything in life, was driven by the master of most decisions, Economics. (I had to get some work done…)
Where we last left our hero, we were exploring the concept of ubiquitous data and direct access to all of it via an implantable access device, and all the questions this raises, not the least of which is, “how is money made through all of this collection, provision, retrieval, synthesis and updating of information?” Who gets paid for making everyone a total genius?
You’re reading this right now, all 13 of
you, and, apart from someone paying for bandwidth and an access device, this
information is “free”. But what if you didn’t need an access device? What if your brain itself was always connected to infinite sources of information via “neural connectivity” to the internet (would you still read my blog)?
It has been said “there’s no such thing as
a free lunch.” (Those of you who don’t know what that means can easily find the
origins of it via the internet.) If we assume that this ubiquitous and
always-on connection could exist, what would you be willing to pay for that ability?
It’s also been said that “knowledge is power”, but do you really need all that “power”? I make a distinction between “data” and “information”, with “data” defined as a mere fact (or a fictitious one) and “information” as a synthesis of data that answers a question.
With seemingly everything being connected these days, data is currently being created and stored at a worldwide rate of ~2.5 billion gigabytes PER DAY, and that rate is growing. According
to a continuing study by EMC, the known digital universe is somewhere north of
~5,000 exabytes (EB) and forecasted to grow to 40,000EB
by 2020, doubling roughly every 2 years. (One exabyte equals a thousand
petabytes (PB), or a million terabytes (TB), or a billion gigabytes (GB)). So
by 2020, the digital universe will amount to over 5,200GB for every man, woman
& child on the planet. The Internet of Things is providing unimaginable amounts of data and, at the same time, creating confusion about what to do with all of it. Humans aren’t so great at sharing things they believe are valuable, so what is the model for valuing it all?
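For the numerically inclined, here is a minimal back-of-the-envelope check of those figures. It assumes the ~5,000EB estimate refers to roughly 2014 and a 2020 world population of about 7.6 billion; both assumptions are mine, not EMC’s.

```python
# Rough sanity check of the digital-universe figures quoted above.
# Assumptions (mine, not EMC's): ~5,000 EB in 2014, ~40,000 EB in 2020,
# and a 2020 world population of roughly 7.6 billion people.
import math

EB_TO_GB = 1_000_000_000            # 1 exabyte = a billion gigabytes
size_2014_eb = 5_000
size_2020_eb = 40_000
years = 2020 - 2014
population_2020 = 7.6e9

doublings = math.log2(size_2020_eb / size_2014_eb)   # 8x growth = 3 doublings
print(f"doubling time: ~{years / doublings:.1f} years")        # ~2.0 years

per_person_gb = size_2020_eb * EB_TO_GB / population_2020
print(f"2020 share per person: ~{per_person_gb:,.0f} GB")       # ~5,300 GB
```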
Even if you had access to all that
data, just by thinking about it, what could you possibly do with it? This, in
essence, is also what “Big Data” is all about – making sense of all that data
and turning it into usable information.
I asked if you would be willing to
pay for neural connectivity and access to all information. A potential supply
chain for access to usable information via a neural connection would have to
place value on connectivity, degree of access, quality of synthesis,
predictability of analytics, refresh rate & content ownership (licensing?), since the digital universe would be the same everywhere (we won’t get into censorship & privacy at the moment). So let’s examine the impending need to
value these new Internet of Things assets. The food chain in this virtual world
needs Connectivity, Access to Information, Synthesis of Information, Analytical
Ability, Currency of Data, & Real (accurate) Content.
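Purely as a thought experiment, those links in the food chain could be sketched as inputs to a composite price. The factor names below come straight from the list above; the class, the weights and the linear pricing form are my own illustrative assumptions, not a real market model.

```python
from dataclasses import dataclass

@dataclass
class NeuralInfoServiceQuote:
    """Hypothetical pricing inputs for a neural-connectivity information service."""
    connectivity_cost: float    # cost of the enabling device/link per month
    degree_of_access: float     # share of the digital universe reachable, 0..1
    synthesis_quality: float    # relevance/accuracy of synthesized answers, 0..1
    predictive_power: float     # quality of forward-looking analytics, 0..1
    refresh_rate: float         # how current the underlying data is, 0..1
    content_accuracy: float     # share of content that is verified/licensed, 0..1

    def monthly_price(self, base_rate: float = 10.0) -> float:
        # Illustrative weighting only; a real market would discover these values.
        quality = (0.25 * self.synthesis_quality
                   + 0.25 * self.predictive_power
                   + 0.20 * self.refresh_rate
                   + 0.30 * self.content_accuracy)
        return self.connectivity_cost + base_rate * self.degree_of_access * (1 + quality)

# Example: broad access to high-quality, well-licensed content costs more.
premium = NeuralInfoServiceQuote(15.0, 0.9, 0.95, 0.85, 0.9, 0.95)
print(f"premium tier: ~${premium.monthly_price():.2f}/month")
```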
Connectivity: Assuming that a neural connection to the internet is possible
via neural dust or some other implantable/wearable enablement, there is a cost
for the “device” which enables this connectivity. The question of obsolescence
is an obvious one as technology will change to provide better and newer
versions of these devices.
The rate of obsolescence in technology is becoming so great that many people and organisations have insisted on paying for “usage” of devices and software rather than purchasing them, hence the explosive growth of PaaS & SaaS (Platform as a Service & Software as a Service). Some suppliers are even accused of planned obsolescence of their products in order to drive sales.
So who then provides these services? The creator of the devices, or intermediaries providing a service (a telco, perhaps)? Connectivity and charging models exist today, but connectivity alone won’t be enough in the future world of collective intelligence.
Degree of Access: It is pretty clear that most
people would not require, or even want, access to all information on the
internet. Accepting that as a premise, what would be the method of charging for
access to the data, both at the provider level and at the storage level?
Currently data usage charges vary widely depending on the supplier, geography,
connection type and other services procured by the user. Clearly, some sort of
“user pays” model will be the accepted norm. But what if the user is also a
content provider? Is one person’s “experiential data” worth more than
another’s?
Quality of Synthesis: In the early days of search engines, queries invariably turned up results that were irrelevant (and sometimes even disturbing). If one could be assured that the answer to the query being lodged was going to be highly accurate, then that information would be more valuable than questionable results from an inferior synthesis of raw
data. So, the quality of data being synthesized would be of paramount
importance and, therefore, very valuable, as would the “intelligence” of the
analytical engine. Should the model be “results-based”, “capacity-based” or “volume-based”?
Predictability and Analytics: If I ask a question such as, “What is the national capital of the United States?”, there is a definitive answer to that question: Washington, D.C. However, if I ask which national capital in the world is most likely to have the largest reduction in crime rates over the next 5 years, that question relies on the collection of an extremely large set of data, such as current crime rates, recent historical trends in those rates, the political climate in those cities, the accuracy of that data (see the next topic) and a massive amount of other component data, in order to effectively predict a valid answer. Therefore,
being able to take the synthesis of data to a level of accurate predictability
would add huge value to the equation. Predictive analysis has heretofore been limited to human thinking and the rational, logical conclusions drawn from information gathering. With this new collective capability, anyone could
arrive at highly probable predictions using “artificial intelligence”.
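To make that concrete, here is a minimal sketch of the mechanics, using made-up crime-rate figures for three hypothetical capitals and a simple linear trend. A real collective-intelligence engine would weigh political climate, data accuracy and countless other signals; this only shows the shape of the prediction step.

```python
# Toy predictive-analytics example: fit a linear trend to a few years of
# hypothetical (entirely made-up) crime rates per capital city, project
# five years forward and rank the predicted reductions.
import numpy as np

years = np.array([2011, 2012, 2013, 2014, 2015])
crime_rates = {   # incidents per 100,000 residents, illustrative only
    "Capital A": np.array([820.0, 790.0, 765.0, 740.0, 720.0]),
    "Capital B": np.array([410.0, 415.0, 400.0, 405.0, 398.0]),
    "Capital C": np.array([655.0, 640.0, 610.0, 580.0, 555.0]),
}

predicted_reduction = {}
for city, rates in crime_rates.items():
    slope, intercept = np.polyfit(years, rates, 1)    # least-squares trend line
    projected = slope * (years[-1] + 5) + intercept   # rate five years out
    predicted_reduction[city] = rates[-1] - projected

best = max(predicted_reduction, key=predicted_reduction.get)
print(f"Largest predicted 5-year reduction: {best} "
      f"({predicted_reduction[best]:.0f} fewer incidents per 100,000)")
```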
Refresh Rate: With stored data growing exponentially, real-time access to the latest data for synthesis is much more valuable than a “snapshot” taken even hours earlier. Think how often airlines refresh their arrival and departure times today versus the printed booklet of just a few years ago, which was of very limited use. Does anyone
remember the OAG (Official Airline Guide) that every road-warrior used to carry
in their briefcase? In areas like securities trading and foreign exchange,
microsecond changes can mean billions of dollars.
Content Accuracy & Ownership: This is by far my
favourite. According to the Pew Research Center, it is estimated that roughly 7
out of 10 news stories on the internet are created by individuals not
associated with any news agency or authority. In other words, that information is either an individual’s perception of a fact or simply made up completely, with no fact checking. Since the accuracy of the information from
the internet is bound only by what is retrieved and consumed by the user, the
accuracy AND origin of that content can be questionable. How do we then attach
value to “true” content creation?
Also, a generation of millennials has grown up with the perception that “if it’s on the internet, it’s free”. Many people today give little regard to the true ownership of the music, movies and other copyrighted material on the internet. According
to the Go Gulf Web Agency, 70% of online users find nothing wrong with online
piracy and 22% of all global internet traffic is used for online piracy.
If there is not a sure-fire way of
ensuring that a value is attached to accurate and original content, those true creatives
who develop this content cannot continue to operate. What will remain is what I
refer to as “hobby content” – content that is created by individuals just
because they enjoy doing it (my blog for instance).
Not that there is anything wrong with this content from an entertainment perspective, but how does one attribute different value to hobby content versus “researched” or professionally created content like music & movies?
The attachment of value to things
outside the production and sale of ‘widgets’ has been called “platform
economics”. Professor Marshall Van Alstyne of Boston University studies
this new economy and has identified this “missing link” in the Internet of
Things. When one looks at the value people attach to companies like Apple, Uber, Google & Amazon, much of that value is derived from the assets associated with a platform rather than from the platform or product itself.
In my next installment of Collective Intelligence,
we’ll discuss and question how the new world of Platform Economics can apply to
the supply chain of “The Machine” and also pose the question – “Are we already
IN the Machine?”.