Smart Data — A Brief Timeline of Intelligent Technology

The unstoppable progression toward “Smart Everything,” also known as “Software-Defined Everything”

Software-Defined Data, or “Smart Data,” is just one more step toward Software-Defined Everything

Part 2 of 3 — Smart Data & Digital Rule of Law Series:

  1. Part 1: Creating Digital Rule of Law
  2. Part 2: Smart Data — A Brief Timeline of Intelligent Technology
  3. Part 3: Open Source Architecture for Smart Data and Digital Rights

This is the second part in a three-part series about data and digital civil rights. We’re focusing on how humanity can personalize the technology around us by tailoring the uses of our data, even after it’s been shared. This requires creating a standard that attaches and enforces contracts to our increasingly mobile data. Recent advancements in cryptographic techniques like Smart Contracts, Zero-Knowledge Proofs, and Secure Multi-Party Computation finally provide workable solutions to this problem.

‘SMART’ means “Self-Monitoring, Analysis and Reporting Technology”

Smart software-defined products are capable of environmental awareness and group intelligence, and can automatically respond to internal and external events. [1 Wikipedia]

A smart object enhances the interaction with not only people but also with other smart objects. Also known as smart connected products or smart connected things (SCoT), they are products, assets, and other things embedded with processors, sensors, software, and connectivity that allow data to be exchanged between the product and its environment, manufacturer, operator/user, and other products and systems. [2 Wikipedia]

All ‘SMART’ devices and sensors can now interoperate using Smart Data to share, secure, and orchestrate their collective intelligence:

  1. Smart Cities and Homes can interoperate with Smart Healthcare to improve community health — privately and securely.
  2. Smart Retail can interoperate with Smart Homes and Smart Industry for intelligent supply chains — privately and securely.
  3. Smart Transportation can interoperate with Smart Cities and Smart Industries to optimize transportation and energy costs while minimizing emissions — privately and securely.

The smart object concept was introduced by Marcelo Kallmann and Daniel Thalmann[4] as an object that can describe its own possible interactions. The main focus here is to model interactions of smart virtual objects with virtual humans (agents) in virtual worlds.

Software-Defined “Smart” Everything

Over the last twenty years, we’ve witnessed the slow and steady march of software control over complex hardware up the technology stack: from Software-Defined Networking and Network Virtualization, to Software-Defined Storage and Storage Virtualization, to Server Virtualization, to Software-Defined Data Centers and Cloud Computing, to Application Containerization and Application Virtualization.

Surprisingly, data remains the final tier of the technology stack that has not completed this software “smart” transformation.

Each step up the technology stack enabled automation which drove exponential cost efficiencies. Each step changed application architectures, enabled new business models, and fundamentally altered technology roles and the structure and operation of technology organizations. Arguably, each step up the technology stack was more disruptive than the previous steps. We believe software-defined data, programmable data, data virtualization with containerization, or data as code will be the most disruptive step up the technology stack thus far.

Each step followed the Law of Diffusion of Innovations, with innovators and early adopters taking risks and benefiting immensely from first-mover advantages. We expect software-defined data to be more disruptive than previous steps because it allows non-technical business leaders and end-users to control their data and even orchestrate their data flows across multiple digital services. It creates the foundational trust required to share exponentially more data and tailor the technology around us to our unique individual needs.

As we look at each step up the technology stack, four repeating themes reveal what we can expect from programmable data:

  1. Virtualization & Containerization as Software Abstractions
  2. Application Programming Interfaces (APIs) & Automation
  3. Centralized Control & Distributed Enforcement from Policy Templates
  4. Increasingly Self-Aware, Adaptable & Environment-Responsive

Virtualization & Containerization

Network, Storage, and Server Virtualization all replace what was previously a single point of failure (a single hardware device) with intelligent software running on highly redundant hardware. Each is a software-created abstraction of a network, storage, or server resource.

Application Containers allow code to be packaged and distributed across different cloud platforms. These cloud platforms are software-defined data centers we can rent on demand, built on software-defined networking, storage, and compute as underlying technologies.

Data Virtualization is an existing enterprise technology that helps data management professionals understand and manage data flow within an enterprise. It enables internal applications to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted at the source or where it is physically located. It can provide a valuable master data source for a single customer view.

Data Virtualization does not currently extend beyond the organization boundary, limiting its ability to manage all organizational data. We propose adding Data Containerization that extends Data Virtualization to the network edge anywhere data moves inside and outside an organization. Data Containers enable secure and trustable distributed data automation across cloud platforms much the same way Application Containers enable safe and trustable distribution and orchestration of application code across cloud platforms.
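The article does not specify a concrete Data Container format, but the core idea, a payload bound to its contract so both stay verifiable wherever the data travels, can be sketched. The following Python sketch is purely illustrative; every class, field, and name is a hypothetical assumption, and a real container would encrypt the payload rather than carry it in the clear:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class SmartDataContract:
    # Machine-readable usage terms that travel with the data (illustrative fields)
    allowed_purposes: list
    retention_days: int
    allowed_regions: list

@dataclass
class DataContainer:
    payload: bytes                 # a real container would hold ciphertext here
    contract: SmartDataContract
    checksum: str = field(init=False)

    def __post_init__(self):
        # Bind payload and contract together so tampering with either is detectable
        contract_bytes = json.dumps(vars(self.contract), sort_keys=True).encode()
        self.checksum = hashlib.sha256(self.payload + contract_bytes).hexdigest()

    def verify(self) -> bool:
        # Any recipient can re-derive the checksum before honoring the contract
        contract_bytes = json.dumps(vars(self.contract), sort_keys=True).encode()
        return self.checksum == hashlib.sha256(self.payload + contract_bytes).hexdigest()
```

A production design would add encryption with user-controlled keys and digital signatures; the hash here only illustrates binding data to its terms.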

Automation & Application Programming Interfaces (APIs)

APIs are interfaces that allow different software programs to communicate and share data according to a set of clearly defined methods of communication. Developers can reuse and scale software architectures by making calls to the available APIs of other programs. APIs are the electrical sockets that connect programs and are the critical enabler of software automation.

APIs and their automation enabled the programmatic reconfiguration of networks, allowing traffic rules to adjust to changing networking needs, and the automatic reconfiguration of server file systems to adapt storage capacity to dynamic and changing demands. APIs and automation enable entire data centers to be created within software, automated and dynamically adjusted based on changing user and application needs. All of this occurs with a fraction of the people it took to manage these systems without automation. Automation creates organizational agility and cost efficiencies while dramatically improving user experiences through enhanced consistency and quality.

Data Containers follow the typical data lifecycle (CRUD — Create, Read, Update, Delete), from creation to reads and updates to eventual deletion. Data Container lifecycle APIs enable distributed Data Container automation and management for all targeted containers at each step in the data lifecycle.
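To make the lifecycle concrete, here is a hedged sketch of what such CRUD lifecycle APIs could look like. The class, method signatures, and contract shape are all hypothetical assumptions for illustration, not part of any existing standard:

```python
class ContainerRegistry:
    """Toy lifecycle API for Data Containers (CRUD), checking the attached
    contract at each step. All names here are illustrative assumptions."""

    def __init__(self):
        self._store = {}

    def create(self, container_id, payload, contract):
        # The contract travels with the data from the moment of creation
        self._store[container_id] = {"payload": payload, "contract": contract}

    def read(self, container_id, purpose):
        entry = self._store[container_id]
        # Distributed enforcement: every read is checked against the contract
        if purpose not in entry["contract"]["allowed_purposes"]:
            raise PermissionError("read not permitted by Smart Data Contract")
        return entry["payload"]

    def update(self, container_id, payload):
        self._store[container_id]["payload"] = payload

    def delete(self, container_id):
        # The 'right to be forgotten' maps naturally onto the delete step
        del self._store[container_id]
```

Because every lifecycle step passes through an API, policy enforcement and audit logging can be automated uniformly across all targeted containers.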

Centralized Control & Distributed Enforcement

We repeatedly see centralized control paired with distributed policy enforcement across many technology layers. Organizations can set and automatically enforce policy constraints. They can change these constraints across potentially thousands of distributed systems by changing a single policy statement.

Smart contracts have evolved into sophisticated distributed policy engines often associated with blockchains. Smart Data Contracts build on Smart Contract technology and apply the same ideas to automate data transactions and data agreements. They document data policies, regulations, licenses, terms of use, and user preferences in language software can execute and automate. Regardless of what attorneys call them, these Smart Data Contracts share the same underlying automation capabilities.

Here are some of the terms they can automate:

  1. Temporal: When can I collect and use data about you?
  2. Location: Where can I collect and use data about you?
  3. Duration: How long can I store your data?
  4. Aggregation: How may I aggregate data about you?
  5. Identity: What persona are you when I interact with you?
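As an illustration, the temporal, location, duration, and aggregation terms above can be evaluated mechanically once a contract is machine-readable. The Python sketch below assumes one hypothetical dictionary encoding of a Smart Data Contract; it is a possible shape, not a proposed standard. (The identity term is omitted here; persona selection would typically involve the surrounding identity system rather than a single predicate.)

```python
from datetime import datetime, timezone

def permits(contract: dict, request: dict) -> bool:
    """Check an access request against illustrative Smart Data Contract terms."""
    now = datetime.now(timezone.utc)
    if not (contract["valid_from"] <= now <= contract["valid_until"]):
        return False                                 # Temporal: when data may be used
    if request["region"] not in contract["allowed_regions"]:
        return False                                 # Location: where it may be used
    if (now - contract["collected_at"]).days > contract["retention_days"]:
        return False                                 # Duration: how long it may be stored
    if request["aggregation"] not in contract["allowed_aggregations"]:
        return False                                 # Aggregation: how it may be combined
    return True
```

In practice such a predicate would run at every enforcement point the data reaches, so changing one contract term changes behavior everywhere at once.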

Zero-Knowledge Proofs allow a party to ask questions about data without revealing the data itself. Zero-Knowledge Proofs enable the following additional automation:

  1. Functional: How can I collect and use data about you?
  2. Proxy Entity: With whom can I share information about you?
  3. Proxy Purpose: Under what conditions can I share your data?
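Real zero-knowledge proofs require cryptographic machinery (typically a zk-SNARK or similar proving library) that is beyond a short example, but the interface they enable, answering yes/no questions without exposing the underlying record, can be sketched. The code below is a non-cryptographic stand-in for that interface only; all names are hypothetical:

```python
class PredicateGateway:
    """Answers yes/no questions about a private record without returning it.

    NOTE: a toy illustration of the *interface* only. A real deployment would
    back each answer with a cryptographic zero-knowledge proof, so the asker
    need not trust the gateway itself.
    """

    def __init__(self, private_record: dict):
        self.__record = private_record  # kept private; never returned to callers

    def satisfies(self, predicate: str, value) -> bool:
        checks = {
            "age_at_least": lambda v: self.__record["age"] >= v,    # Functional use
            "resident_of": lambda v: self.__record["region"] == v,  # Proxy conditions
        }
        return bool(checks[predicate](value))
```

A verifier can learn, for example, that "age_at_least 18" holds without ever seeing the birthdate, which is the property that makes the Functional and Proxy terms above automatable.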

Taken together, a standard for automating regulations like GDPR, CCPA, CPRA, and others begins to come into focus. Imagine fully automating the right to be forgotten or the right to data portability. Imagine automating internal data policies as data flows inside and outside an organization.

While we can’t automate all data usage terms that might exist in policies, regulations, licenses, terms of use, and preferences, we now possess the technology to automate most of those terms. We believe that with the proper foundation, future innovators will discover many new and novel automation methods.

Increasingly Self-Aware & Environment-Responsive

Networks, servers, and applications became increasingly self-aware, adaptive, and responsive to their environment while operating within predefined administrator policies, with rapid recovery from most kinds of hardware failure.

What if our data were self-aware and responsive to its environment? Imagine small open-source programmable data containers, encrypted with keys you control, moving around with Smart Data Contracts you create that constrain or guardrail the uses of your data — your personal data and your organizational data.

Applying these four themes to data enables unprecedented data self-awareness, opening a new era of data agility, regulatory and policy automation, distributed and dynamic data architectures, and transparent and accountable data usage, all with trustable data security, provenance, and quality.

What’s Next?

In the previous article, Creating Digital Rule of Law, we explain the mandate to tailor the connections any technology has to our world and our minds via our defendable, explicit, informed, consensual, unending, and unlimited right to control the uses of our accumulated data.

Our final article proposes an open-source technology architecture for Smart Data to create Digital Rule of Law as an essential foundation to a free, open, transparent, and accountable digital society.

Want to help?

  1. Join the Smart Data Ecosystem and help create Digital Rule of Law to preserve all human rights.
  2. Follow Data Freedom Foundation and Accesr on social media and follow me on Medium. Data Freedom Foundation is on LinkedIn, Twitter, Facebook and YouTube. Accesr is on LinkedIn, Twitter, Facebook and YouTube.
  3. Contact us to get involved — we have many open roles and stock options for early supporters.


Alan Rodriguez is an accomplished digital leader and patent author specializing in strategy and emergent digital business models. He’s available to tailor an IP and digital strategy for a few select organizations.

alanrodriguez@accesr.com
