Demystifying Edge Computing: Part 4 – Analytics at the Edge, Fog and Cloud

We have spoken about edge, fog, and to a much lesser degree cloud computing. Each has pros and cons, but at the end of the day, why collect data anywhere if you are not going to do analytics somewhere? In my opinion, this is where the rubber meets the road. You need to introduce machine learning and AI to transform your data into descriptive, predictive, and/or prescriptive outcomes that drive business value.

The challenge becomes identifying the location to run your analytics.  Let’s recap the basics:

  • Edge Computing:

      1. Lowest latency (closest to the “thing”)
      2. Limited to data collected at the edge
      3. No access to historic data
      4. Minimal compute processing power
      5. No integration with back-office IT systems (operationalizing insights)
  • Fog Computing:

      1. Minimal latency (not far from the “things”)
      2. Access to all data streams for the connected “thing(s)”
      3. Limited historic data available (30 days or less)
      4. Configurable compute processing power, typically much more than the edge but still far less than the cloud
      5. Limited access to back-office IT systems (operationalizing insights)
  • Cloud Computing:

      1. Highest latency (farthest from the “things”)
      2. Access to all data streams for connected “thing(s)”, IT systems, and third-party data sources
      3. Complete access to historic data
      4. Ability to provision as much compute processing power as you need, for as long as you need it
      5. Limited only by corporate policies on access to back-office IT systems (operationalizing insights)
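The trade-offs above can be summarized as a simple routing heuristic. The sketch below is illustrative only: the `WorkloadRequirements` fields, the 10 ms latency cutoff, and the tier names are assumptions I am using to make the decision logic concrete, not part of any real product.

```python
from dataclasses import dataclass

@dataclass
class WorkloadRequirements:
    """Hypothetical profile of an analytics workload (illustrative only)."""
    max_latency_ms: float        # how quickly a result is needed
    needs_historic_data: bool    # requires more than ~30 days of history
    needs_it_integration: bool   # must write back to back-office IT systems

def choose_tier(req: WorkloadRequirements) -> str:
    """Map a workload to edge, fog, or cloud using the trade-offs above."""
    if req.needs_historic_data or req.needs_it_integration:
        return "cloud"   # only the cloud offers full history and IT access
    if req.max_latency_ms < 10:
        return "edge"    # closest to the "thing", lowest latency
    return "fog"         # moderate latency, limited history, more compute

# Example: a real-time safety alert vs. a predictive-maintenance model
print(choose_tier(WorkloadRequirements(5, False, False)))    # edge
print(choose_tier(WorkloadRequirements(500, True, True)))    # cloud
```

The point is not the specific thresholds but the ordering of the checks: data and integration needs eliminate the lower tiers before latency is even considered.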

As I have stated throughout this blog series, the location of the intelligence depends on the use case and the expected results. However, here are a few pointers to consider on the “where, when, and why” of descriptive, predictive, and prescriptive analytics.

Here are quick definitions of each:

  • Descriptive analytics are used to monitor the state of assets and processes
  • Predictive analytics use historical data sets to predict the state of assets and processes in the future
  • Prescriptive analytics provide optimizing actions or recommendations to business processes based on predicted future states
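The three definitions above can be made concrete with a toy example. The sketch below, assuming a made-up temperature series and an arbitrary safety threshold, shows each layer building on the previous one: describe the current state, predict the next one with a simple linear trend, then prescribe an action.

```python
# Toy sensor series (°C); the values and the threshold are illustrative.
readings = [68.0, 69.5, 71.0, 72.4, 74.1, 75.9]

# Descriptive: summarize the current state of the asset.
current = readings[-1]
average = sum(readings) / len(readings)

# Predictive: fit a least-squares linear trend to project the next reading.
n = len(readings)
xs = range(n)
x_mean = sum(xs) / n
slope = sum((x - x_mean) * (y - average) for x, y in zip(xs, readings)) / \
        sum((x - x_mean) ** 2 for x in xs)
predicted_next = average + slope * (n - x_mean)

# Prescriptive: recommend an action based on the predicted future state.
LIMIT = 77.0  # assumed safe operating threshold
action = "schedule maintenance" if predicted_next > LIMIT else "no action"
print(f"{predicted_next:.1f} -> {action}")
```

Note what each step needs: the descriptive summary only needs current data (edge-friendly), while the prediction needs history and the prescription needs a business rule, which pushes those steps toward the fog or cloud.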

The matrix below can serve as a guide for an analytics location decision. However, the use case will ultimately dictate where the analytics run.

Examples of the use cases that should be considered at each level:

  • Edge:

    • Product Quality: Visual (camera) inspection for surface-level anomaly detection
    • Safety: Employee safety on a plant floor using Bluetooth and video technologies
    • Real-time analytics
  • Fog:

    • Security: The use of cameras to identify a security threat
    • Energy: Sensor data converging in the fog layer to determine peak usage and predict upper/lower thresholds
    • Real-time analytics
  • Cloud:

    • Predictive Maintenance: Using sensor data to predict failure of a specified asset
    • Prescriptive Analytics: Process parameter recommendations for quality improvement
    • Near-real-time analytics and batch analytics
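To illustrate why visual inspection fits the edge tier, here is a deliberately simple sketch: no ML framework, no historic data, just a per-pixel comparison against a known-good baseline that fits an edge device's compute budget. The function name, the gray-value representation, and both thresholds are assumptions made for illustration.

```python
def surface_anomaly(pixels, baseline, tolerance=30):
    """Flag pixels that deviate strongly from a known-good baseline image.

    `pixels` and `baseline` are flat lists of 0-255 grayscale values.
    Returns (flagged, defect_indices).
    """
    defects = [i for i, (p, b) in enumerate(zip(pixels, baseline))
               if abs(p - b) > tolerance]
    # Flag the part only if enough pixels deviate (filters sensor noise).
    return len(defects) > len(pixels) * 0.01, defects

good = [120] * 100
scratched = [120] * 100
scratched[40:45] = [200] * 5  # a bright scratch across five pixels

flagged, where = surface_anomaly(scratched, good)
print(flagged, where)  # True [40, 41, 42, 43, 44]
```

A check like this answers the pass/fail question locally with near-zero latency, while the flagged frames can still be forwarded to the fog or cloud for richer model training and historical analysis.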