Innovate faster with our
Framework and SDKs

CloudMade Solutions

Software modules that power intelligent vehicle use cases.

Benefits

Increase your speed to market and decrease your cost and risk in the deployment of machine learning.

Launch your own in-house features or those developed by third parties.

Buy custom CloudMade use cases off the shelf.

Using CloudMade solutions you can:

  • Transform the user experience
  • Enable smarter, safer journeys
  • Create new revenue streams (for OEMs and partners)

CloudMade’s framework allows you to maximize data value using three learning approaches.

A unique architecture
for learning.

Our flexible framework architecture allows you to deploy CloudMade components within your own solution: leverage your vehicle or smartphone sensor network, compute on board and/or in the cloud, and deliver predictions back to the device.

Adaptive Framework

The industry-leading cloud & SDK product for collecting and analysing automotive data sets.

CloudMade Adaptive Framework addresses the complexity of deploying machine learning and artificial intelligence in automotive.

Adaptive Framework components

CloudMade’s Adaptive Framework allows you to build intelligent
mobility solutions faster and with more flexibility.

Select the jobs you are focusing on to find out more about the different framework
components and how they can work for you.


Data collection:

  • Abstract and combine data, agnostic of device/OS
  • Manage data delivery in challenging environments
  • Control data quality & privacy

Data insights, analytics & processing:

  • Get fast access to data and visualise it
  • Extract basic features
  • Create ML modules for personalization

Multi-domain learning:

  • Get automotive-specific features (trips, events, settings, etc.)
  • Clean, split and fix data
  • Reuse ML & algorithms for features

New ML / predictions delivery and distribution:

  • Make integration simpler with open APIs
  • Manage the delivery of models and features
  • Plug in algorithms from supporting suppliers

Create new predictive / adaptive features:

  • Make coherent, real-time predictions across devices
  • Build modules using standard open APIs
  • Share predictions across features

Environment operational management:

  • Make the framework scalable and modular
  • Manage software updates and upgrades
  • Manage your cloud architecture

Geo data distribution:

  • Process geo data with anonymizing and cleaning
  • Enrich map data with ML data
  • Distribute geo data efficiently by vehicle location

Models validation and operational management:

  • Enable the whole cloud-vehicle loop with ML APIs
  • Reduce time-to-market for algorithm development
  • Prove algorithm performance with reports and alerts

Feature preparation components

Toolchains that check the quality and integrity of feature data to create reliable datasets.

Journey builder

Processes data feeds across devices, provides enrichment from third-party sources, and builds journeys for inference engine (IE) use.

Data validation

A process to filter and report on the quality and integrity of data feeds.

Data validation reporting

Reports and alerts on data validation.

ML delivery & distribution components

Manage the creation, updates and distribution of personal profiles that enable predictions across cloud, phones and vehicles.

Inference engine scheduler

Uses standard scheduling software as a base to schedule IE jobs.

Model repository manager

A repository to store personal inferences.

Profile builder

A user-device Edge service for management and profile distribution.

Cloud predictions REST APIs

APIs that provide a wrapper for the execution of prediction plug-ins in the cloud for use in portals, web apps or via web-api from other devices.

Context Monitor

A process that monitors context signals to determine whether predictions need to be updated, and if so executes them. Predictions are published and available at any time for standard context. Predictions for specific context (including what-if ad hoc requests) are available on demand via RPC-like mechanisms.
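
The behaviour described above can be pictured as a small monitoring loop: re-predict only when a watched signal changes, publish the result, and answer ad hoc what-if requests separately. This is an illustrative sketch, not CloudMade's implementation; `predict_fn` and the signal names are hypothetical.

```python
class ContextMonitor:
    """Watches context signals and refreshes a published prediction on change.

    `predict_fn` stands in for an inference-engine call (hypothetical).
    """

    def __init__(self, predict_fn, refresh_keys):
        self.predict_fn = predict_fn
        self.refresh_keys = set(refresh_keys)   # signals that trigger an update
        self.last_context = {}
        self.published = None                   # latest standard-context prediction

    def on_signal(self, context):
        """Feed a new context snapshot; re-predict only if a watched key changed."""
        changed = {k for k in self.refresh_keys
                   if context.get(k) != self.last_context.get(k)}
        self.last_context = dict(context)
        if changed or self.published is None:
            self.published = self.predict_fn(self.last_context)
        return self.published

    def what_if(self, overrides):
        """On-demand prediction for an ad hoc context (RPC-style), not published."""
        return self.predict_fn({**self.last_context, **overrides})
```

The design choice to re-predict only on watched-signal changes is what keeps standard-context predictions cheap to serve at any time.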

Profile synchronizer

An in-vehicle process that manages the local cache of user profiles from cloud or on-board learning. On-board and off-board machine learning for a user is merged with specific on-board or off-board algorithms.
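
One simple way to merge on-board and off-board learning results, as a rough sketch: keep a per-key cache and let the newer version win. Real merging may be algorithm-specific, as noted above; the class and field names here are illustrative only.

```python
class ProfileSynchronizer:
    """Maintains a local cache of user-profile inferences.

    Merges cloud (off-board) and on-board learning results per inference key,
    keeping whichever entry carries the higher version.
    """

    def __init__(self):
        self.cache = {}  # key -> {"value": ..., "version": int, "source": str}

    def merge(self, entries, source):
        """Apply a batch of (key, value, version) inferences from 'cloud' or 'onboard'."""
        for key, value, version in entries:
            current = self.cache.get(key)
            if current is None or version > current["version"]:
                self.cache[key] = {"value": value, "version": version, "source": source}

    def get(self, key):
        entry = self.cache.get(key)
        return entry["value"] if entry else None
```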

Security components

User authentication and confirmation; GDPR compliance; onboard and offboard resiliency.

SSO service

User authentication service.

Data injection components

Acceptance and pre-processing of data from the automotive ecosystem; passing features to the event feeder as events.

Data Import for appliances

A tool to load data into an appliance event feeder.

Event feeder

A service that accepts and processes data from multiple sources including vehicles, vehicle telematics systems, data lakes and mobile devices.

Streaming event feeder

A pre-processing stage to accept streaming data and execute feature extraction with links back to the source stream (source / index to allow inspection), and passing features to the event feeder as events.
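
The back-link idea above can be sketched in a few lines: each extracted feature set keeps a (source, index) reference so the original record can be inspected later. A minimal illustration; `extractor` and the field names are assumptions, not CloudMade's API.

```python
def extract_features(stream_id, records, extractor):
    """Pre-process a batch of streaming records into events.

    Each emitted event carries a (source, index) back-link to the source
    stream. `extractor` is a hypothetical per-record feature function; the
    resulting events would then be passed to the event feeder.
    """
    events = []
    for index, record in enumerate(records):
        features = extractor(record)
        if features is not None:           # the extractor may skip a record
            events.append({
                "features": features,
                "source": stream_id,       # back-link to the source stream
                "index": index,            # position within that stream
            })
    return events
```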

Munic.io data processing toolchain

A toolchain to process and collect data from munic.io dongles.

Event logger

A process that accepts events, manages them in a queue, implements storage management rules and submits the stored events for synchronization with cloud systems according to business rules.
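
As a rough illustration of the queue-and-sync pattern described above: a capped queue stands in for a storage-management rule, and a batch-size threshold stands in for a sync business rule. Both parameters and `upload_fn` are illustrative assumptions.

```python
from collections import deque

class EventLogger:
    """Queues events, enforces a storage cap, and syncs to the cloud in batches."""

    def __init__(self, upload_fn, max_stored=1000, batch_size=100):
        self.queue = deque()
        self.upload_fn = upload_fn        # hypothetical cloud-sync callable
        self.max_stored = max_stored      # storage-management rule (cap)
        self.batch_size = batch_size      # business rule (sync threshold)

    def log(self, event):
        self.queue.append(event)
        if len(self.queue) > self.max_stored:
            self.queue.popleft()          # drop the oldest event when over the cap
        if len(self.queue) >= self.batch_size:
            self.sync()

    def sync(self):
        """Submit all queued events to the cloud, batch by batch."""
        while self.queue:
            n = min(self.batch_size, len(self.queue))
            self.upload_fn([self.queue.popleft() for _ in range(n)])
```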

Visualization dashboards components

Web-based dashboards for production maintenance and data science research.

Driver dashboard

A dashboard for exploring profiles and data.

Validation framework dashboard

A dashboard for production maintenance or data science research related to the validation framework.

Journey/profile/etc viewer

Not customer facing, but provides a basis for enhancements to the driver dashboard.

Inference engines components

Use-case centric machine-learning algorithms for intelligent mobility.

Python inference API plugins

A set of plugins that provide inference engine (IE) learning and predictions for a specific domain.

Predictive routes and destinations inference engine

A job for creating predictive routes and destination profiles for vehicle predictions.

Prediction plugins (one per engine)

A portable plug-in that provides predictions for specific IEs.
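
One way to picture such a portable plug-in is a small class implementing a shared prediction interface that a host (cloud or on-board) can execute. This sketch is hypothetical, not CloudMade's actual plug-in contract; the toy destination predictor only illustrates the shape.

```python
from abc import ABC, abstractmethod

class PredictionPlugin(ABC):
    """Minimal plug-in contract an execution host could call into."""

    @abstractmethod
    def predict(self, profile, context):
        """Return a prediction for the given user profile and current context."""

class NextDestinationPlugin(PredictionPlugin):
    """Toy example: predict the most frequently visited destination."""

    def predict(self, profile, context):
        destinations = profile.get("destination_counts", {})
        if not destinations:
            return None                        # no history, no prediction
        return max(destinations, key=destinations.get)
```

Keeping the interface this narrow is what makes a plug-in portable: the same class can run against a cloud profile store or an on-board cache.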

Java IEs in cloud (one per inference engine)

A job for creating and training an ML model for specific IEs.

Machine learning API components

Components that deliver APIs for external service consumption.

Python inference API

A component that provides a Python API for model development.

Data API for prototyping (integration with SageMaker, etc.)

A set of cloud-specific views that allows for working directly with journey data from within cloud-provider ML tools. Not implemented yet, but planned.

On-board learning manager

A vehicle process that manages and executes on-board inference engines, creating inferences locally for inclusion in the device profile and for execution.

ML utility libraries

A broad set of machine-learning algorithms tuned for learning and prediction on vehicle and mobile-device architectures.

Geo data management components

Toolchains that enable hybrid geospatial data management processing for caching, searching and tiling services.

Map data import toolchain

Processes that convert, import or update third-party map data into a hybrid format.

Fleet learning server

An integrated set of processes that form a base for crowdsourced data from various sources, feeding into hybrid map datasets (e.g. ACC usage).

Hybrid server

Service for efficient layered geo-data distribution.

Hybrid onboard server

An on-device service that manages local cache and services clients across the device/vehicle.

Hybrid client

A library that provides access to local hybrid cache and optionally syncs with a hybrid server, as required.
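A cache-first read with an optional server fallback, as described above, can be sketched as follows. The class, `fetch_fn`, and tile identifiers are illustrative assumptions, not the hybrid client's real API.

```python
class HybridClient:
    """Reads geo data from a local hybrid cache, fetching misses from a server.

    `fetch_fn` stands in for a sync call to a hybrid server (optional); with
    no server configured, the client serves cache hits only.
    """

    def __init__(self, fetch_fn=None):
        self.cache = {}
        self.fetch_fn = fetch_fn

    def get_tile(self, tile_id):
        if tile_id in self.cache:
            return self.cache[tile_id]       # served locally, no network needed
        if self.fetch_fn is None:
            return None                      # offline and no local copy
        tile = self.fetch_fn(tile_id)        # sync from the hybrid server
        if tile is not None:
            self.cache[tile_id] = tile       # populate the cache for next time
        return tile
```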

Hybrid place naming

A service that matches geo coordinates to place names using a variety of approaches. This feature can potentially integrate with proprietary services.

Hybrid streaming data management

A pre-processing stage to accept streaming data and execute feature extraction with links back to the source stream (source / index to allow inspection), and passing features to fleet learning or other hybrid storage systems.

Validation methods components

Quality checks for driver profiles and machine-learning algorithm predictions.

Validation framework

A set of jobs for checking the quality of machine learning algorithms.

Profile quality measurement job

A job that evaluates a driver’s profile quality using validation framework results. Enables profile suppression. Might be a part of profile builder.

CloudMade’s framework products are the result of years of development and are available right now.

If you are thinking of developing your own machine-learning framework, talk to us first before committing valuable budget resources to internal development.
Please get in touch for more information about any of our framework components.

Contact us

Products and Services

Use the adaptive framework to power value-adding experiences.

Inference Engines:

  • Specific machine-learning algorithms that turn data into predictions
  • Run individual use cases or combine inferences into complex, hard-to-copy features that enhance brand value
  • Use your own or third party algorithms, or use CloudMade’s tried and tested product

Products and Solutions:

  • Touch the end-user with amazing personalized services
  • Use the Adaptive Framework to quickly spin-up new experiences
  • Build machine-learning powered infotainment bundles that can be monetized
  • Use your customers’ data for cost and warranty reduction across the entire fleet

Algorithms and Products/Solutions are all deployed on top of the Adaptive Framework

CloudMade’s Adaptive Framework allows you to design your intelligent vehicle architecture to maximise reusable components, then build and deploy services on top of that at your own pace.

We are constantly analysing and validating the quality of our algorithms and improving performance. If you are thinking of developing your own machine-learning inference engines or the services running on top of them, talk to us before using your own valuable in-house development resources.

Contact us