In the very near future, the difference between the most successful and the least successful engineering teams will be data collection and knowledge implementation. Teams can already run millions of simulations and tests during product development, but few teams have a solid strategy for acquiring data from embedded sensors, incorporating software/hardware metrics into decision making and getting actionable feedback from the field.
The data-driven design approach has a broad definition, but generally, it involves the use of acquired data, metrics, and technical information to create fully optimized, user-centric, minimal-error designs from the outset.
Data-driven design encourages a relentless cycle of incorporating qualitative and quantitative forms of data into the design and developmental processes of engineering products. It doesn't merely improve overall user experience, but it enhances the workflow of the teams and reduces the downtime spent on updates and servicing customer complaints.
Data-driven product development saves engineers the time, difficulty, and unnecessary cost of making uninformed decisions based on assumptions. Some of the benefits include:
Data-driven design is the more cost-effective, time-efficient, and user-centric approach to product development in engineering. Systems of today are taking on a great deal of functionality and are doing more than they ever have. Today, we have fully autonomous vehicles, electrified airplanes, self-healing materials, and robots working alongside humans in several different fields. These complex systems and emerging technologies all have one major requirement in common: data.
Design and development processes of these systems rely heavily on collecting and analyzing tons of data and this is why data-driven design is the new normal. Data-driven system design propels a production model where activities are fully backed up by informed decisions and solid data, reducing development time substantially when compared to traditional system design processes.
Software engineering teams that adopt this approach are plagued by fewer uncertainties as the first product deployment or release is more likely to match the user's expectations when created with real, valuable and reliable insights.
The first point of need engineering teams have to address with data-driven design is establishing a steady stream of data from devices. Even after production, deployment, and commercialization, products still need ongoing updates, upgrades, and maintenance. This can be done using telematics devices. For example, self-driving cars are built with a number of sensors and cameras to obtain data for different purposes, such as location data for navigation, environmental and road condition data for safety, and traffic data for re-routing.
Collecting this information and using it in a data-driven development process is the ‘secret sauce’ of autonomous vehicle development. Only when this is done well will car manufacturers be able to attain fully autonomous, safe vehicles that people can confidently trust.
According to reports from Statista, approximately 15 billion connected devices exist in the Internet of Things ecosystem as of 2022. The IoT ecosystem consists of everyday devices fitted with embedded sensors and other technology to collect and exchange data with other web-enabled devices over the cloud. Examples of IoT devices include self-driving cars, humans with heart monitors, animals with tracking devices, cellphones, aircraft, smart TVs, smart watches, etc.
IoT devices generate a massive amount of data that can make a world of difference in a company's production operation. While controversial opinions and relentless debates still exist over IoT, companies choosing the data-driven approach can save a great deal of time and money while improving their overall user experience, making better decisions, and mitigating the risk of failure before design or production even begins.
Nearly every smart device or appliance we see today has some form of IoT gateway, and companies tap into this ecosystem to boost product efficiency from the manufacturing line all the way to the consumer's home.
For example, refrigerator manufacturing companies have sensors embedded in the factory belts that move appliances from one point to another during product assembly. These belts report information to the manufacturers about machine health and other operational statuses. The refrigerators are each stamped with a barcode containing manufacturer and model information. The barcodes are used by the manufacturers to keep track of the retailer’s inventory and ensure products are available on time.
In the consumers' homes, the sensors fitted into the compressors and other parts of the refrigerator continuously send the manufacturer data on product health and possible functional faults, which is used to improve future model designs and to proactively schedule technician visits when needed.
Telemetry adds a unique dimension to modern-day data collection: the automatic recording and transmission of data from remote devices to a central device at a different location. Telemetry data can include metrics like user-favored features on a product, application/session monitoring, navigation difficulties, causes of downtime, signal processing issues, data breach attempts, and many others.
Data acquired from telemetry is extremely useful in data-driven modeling and production, providing clear insights into the adjustments and innovations needed to improve user experience.
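As a minimal sketch of the idea, a telemetry record from a remote device might be structured and serialized as below. All field names, the device ID, and the metric here are hypothetical examples, not any real product's schema:

```python
import json
import time

# Minimal sketch of a telemetry record a remote device might emit.
# All field names and values are hypothetical examples.
def build_telemetry_record(device_id, metric, value):
    return {
        "device_id": device_id,    # which device sent the reading
        "metric": metric,          # what was measured
        "value": value,            # the measurement itself
        "timestamp": time.time(),  # when it was recorded
    }

record = build_telemetry_record("fridge-0042", "compressor_temp_c", 4.7)
payload = json.dumps(record)  # serialized for transmission to a central server
```

In practice, the serialized payload would be sent over a transport such as MQTT or HTTPS to the manufacturer's ingestion service, where it feeds the analyses described above.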
Sensors are some of the most important elements of modern technology; without them, much of the technology we have today would be infeasible. Sensors form the front end of most advanced data-driven engineering processes, collecting data from the environment and/or their host devices. They are embedded in every device intended to collect or retrieve data in some form, such as trackers, smart watches, self-driving cars, cellphones, and nearly anything functioning in the IoT ecosystem.
Sensors are of different types and activity ranges, and they collect information such as temperature (in fire control devices), pressure, humidity, motion, images, proximity and so much more. They are a great option for data collection and can be embedded in products to connect these devices back to the manufacturer. The information collected can be used to drive better design ideas, improve safety protocols, manage user experiences, and enhance product efficiency.
Engineering metrics are numerical parameters that can be used to gauge the overall performance of a company, team, or product. They provide useful data for management and decision making. Several companies have centered their development around data, combining data-driven design with model-based development to improve engineering efficiency and reduce costs.
The following are classifications of engineering metrics and examples which can be used in conjunction with telemetry to build a solid data driven design architecture:
Companies must use a variety of safety metrics to track the performance of their products and ensure that they are safe for public testing. This is particularly important for autonomous vehicle development, where a crash could be fatal. Metrics used by such companies can be divided into three distinct categories: safety, performance, and usability.
Examples of the key metrics used in autonomous vehicles include crash avoidance, lane changing accuracy, braking response time, and blindspot detection. Crash avoidance is an important metric to track and measure. Companies use a variety of sensors and cameras to detect potential obstacles and react appropriately. This includes using object detection algorithms, which can identify objects in the car's environment and respond accordingly.
Performance metrics measure how quickly an autonomous car performs certain tasks, such as changing lanes or braking. Companies monitor different performance metrics depending on the type of vehicle being tested, such as acceleration rate and top speed. Additionally, companies track the car's ability to follow lane lines and maintain a consistent speed when on the highway or in traffic.
Finally, usability metrics measure how well an autonomous car interacts with its passengers and other drivers on the road. Companies monitor factors such as how well the vehicle communicates with the driver, how well the car navigates a route, and how smooth the ride is.
By tracking all of these metrics, companies can ensure that their products are safe and ready for testing in public. This helps them stay ahead of the competition in this rapidly growing field.
Velocity is a metric that estimates the number of tasks a team can successfully execute within a time period. It can be calculated over weeks, months, quarters, or a year. In the data-driven process, this metric allows a team to estimate how many tasks to allocate, plus a reasonable margin. Velocity can be measured in hours or story points, where the latter is a project management metric that estimates the amount of effort required to complete a backlogged task.
Correctly estimating velocity and using it in the design process enables a team to set attainable goals, make adjustments accordingly, and increase targets efficiently.
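To make the calculation concrete, here is a small Python sketch of velocity measured in story points. The sprint history and the 10% planning margin are hypothetical figures for illustration only:

```python
# Hypothetical history: story points completed in each of the last
# five sprints (illustrative numbers, not real data).
completed_points = [21, 18, 24, 19, 23]

# Velocity is the average number of story points the team finishes
# per sprint, used to size the next sprint's commitment.
velocity = sum(completed_points) / len(completed_points)

# Plan to a fraction of measured velocity (here an assumed 90%)
# so the next sprint's target stays attainable.
next_sprint_target = round(velocity * 0.9)

print(velocity)            # 21.0
print(next_sprint_target)  # 19
```

The margin factor is a team-level judgment call; measuring velocity over more sprints smooths out one-off spikes and dips.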
Sprint burndown is a metric that shows the amount of work left to be completed before a sprint is over, rather than just showing completed tasks. The graph is usually plotted over time and shows how fast a team is working through its workload. It provides a different kind of visualization insight: team members can use it in conjunction with velocity data to determine how fast they expect to sprint through a set of tasks.
In data-driven software engineering, sprint burndown is an important forecasting metric because it gives teams an overview of a design sprint's trajectory before actual work begins. Teams can see their progress in real time, track scope changes, and effectively estimate completion time.
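A burndown series is simple to compute: subtract each day's completed work from the committed total. The sprint size and daily figures below are hypothetical, chosen only to illustrate the shape of the data:

```python
# Hypothetical sprint: 40 story points committed over a 10-day sprint.
# The daily completion figures are illustrative, not real data.
committed = 40
completed_per_day = [4, 5, 3, 0, 6, 4, 5, 4, 5, 4]

remaining = committed
burndown = []  # points still open at the end of each day
for done in completed_per_day:
    remaining -= done
    burndown.append(remaining)

print(burndown)  # [36, 31, 28, 28, 22, 18, 13, 9, 4, 0]
```

Plotting this series against the ideal straight line from 40 to 0 is what makes the flat spot on day 4 (no points completed) visible at a glance.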
Cost estimation forms the foundation of every engineering project. It helps project managers and team leaders estimate the resources and funds needed to complete every phase of a system design process and ensure they stay on track. Inaccurately estimating cost can derail the success of an engineering project, and overestimating the cost needed might get the project canceled. Costing is a fragile process in the data-driven design approach: it must precisely account for every engineering resource, storage and compute costs for all the data, materials needed to develop prototypes, manufacturing, assembly, and handling costs, and so on.
Estimating the cost of an engineering design project allows engineers to generate functional working budgets, make adjustments, prioritize certain aspects of the project over others, slash allocations where necessary, and optimize the overall process efficiently.
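A minimal sketch of such an estimate is a sum over cost line items plus a contingency margin. The categories echo those named above; the dollar amounts and the 15% margin are assumed figures, not real project data:

```python
# Hypothetical cost line items for a design project, in dollars.
cost_items = {
    "engineering_hours": 120_000,
    "storage_and_compute": 18_000,
    "prototype_materials": 35_000,
    "manufacturing_assembly_handling": 60_000,
}

total = sum(cost_items.values())

# Add an assumed 15% contingency margin so estimation error is
# absorbed without inflating the budget enough to risk cancellation.
# Integer arithmetic keeps the result exact.
budget = total + total * 15 // 100

print(total)   # 233000
print(budget)  # 267950
```

Keeping the items in a dictionary makes it easy to re-prioritize or slash individual allocations, as the process above describes.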
Cycle time is a measure of the time taken to complete an individual task. It's an important metric to measure how fast a manufacturing line can produce a product or how fast a product can complete a necessary computing or design step. These tasks could be anything from producing a widget to updating a server request. The time needed for some tasks could be a few minutes while others may take several weeks to complete. Cycle time allows the team leaders to make informed decisions about the product's internal functioning and how the manufacturing process will work.
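Cycle time per task is simply the finish time minus the start time, averaged across tasks. The timestamps below are hypothetical examples:

```python
from datetime import datetime

# Hypothetical start/finish timestamps for three completed tasks.
tasks = [
    (datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 15, 0)),   # 6 h
    (datetime(2023, 1, 3, 10, 0), datetime(2023, 1, 4, 10, 0)),  # 24 h
    (datetime(2023, 1, 5, 8, 0), datetime(2023, 1, 5, 20, 0)),   # 12 h
]

# Cycle time per task is finish minus start; average across tasks.
durations_h = [(end - start).total_seconds() / 3600 for start, end in tasks]
avg_cycle_time_hours = sum(durations_h) / len(durations_h)

print(avg_cycle_time_hours)  # 14.0
```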
Throughput measures the number of work items an engineering team completes in a given period, providing insight into overall workflow performance. Team members can see where tasks aren't flowing efficiently and where bits of work are causing a bottleneck in the operation. Utilizing throughput in data-driven manufacturing sets the pace for an operational architecture to be developed: each team member is assigned tasks in a way that enables smooth collaboration and prevents delays and errors.
Throughput is commonly visualized with a flow diagram, a simple chart in which tasks or roles are plotted against time and from which overall cycle time and throughput can easily be calculated. Trends such as employee collaboration and task combinations can be identified from flow diagrams to predict the workflow.
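In code, throughput is an average of completed items per period. The weekly counts and work-in-progress figure below are hypothetical; the relation connecting throughput to cycle time is Little's law, a standard queueing result rather than something specific to this article:

```python
# Hypothetical weekly counts of completed work items.
completed_per_week = [8, 12, 10, 9, 11]

# Throughput is the number of items completed per unit time,
# here averaged per week.
throughput = sum(completed_per_week) / len(completed_per_week)

# Little's law (a standard queueing relationship) ties throughput to
# cycle time: average cycle time = average work in progress / throughput.
avg_wip = 15  # assumed average number of items in progress
avg_cycle_time_weeks = avg_wip / throughput

print(throughput)            # 10.0
print(avg_cycle_time_weeks)  # 1.5
```

This is why throughput and cycle time can both be read off the same flow diagram: the two are linked through the amount of work in progress.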
Mean time to repair (MTTR) is the average of the time between a system error or failure and the moment the system is fully functional again. It measures how long engineers take to fully fix errors or repair problems, counting from the time the error is established to the time the system is restored. Individual repair times are averaged over a period to determine a final mean value, and generally, the smaller the better. MTTR is mostly used by software engineering teams for downtime on servers, websites, and applications.
In the data-driven design process, MTTR enables the team to estimate the amount of downtime customers may experience and to properly allocate resources for repair and reinstatement of services.
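The calculation itself is an average over repair durations. The incident log below is a hypothetical example:

```python
# Hypothetical incident log: (failure detected, service restored),
# both expressed in hours since the start of the month.
incidents = [(10.0, 13.0), (50.0, 51.0), (120.0, 122.0)]

# MTTR averages the repair duration across all incidents;
# a smaller value means faster recovery.
repair_times = [restored - detected for detected, restored in incidents]
mttr_hours = sum(repair_times) / len(repair_times)

print(mttr_hours)  # 2.0
```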
User support response time is a measure of the average amount of time it takes for a member of the support team to respond to a user's problem or successfully log a complaint. Generally, the shorter the average response time, the better the customer experience of the product. On the other hand, longer support periods imply user and technical difficulty with the product and might reduce patronage of current and future products.
User response time is calculated from the moment a support request is initiated until it is first acknowledged or resolved.
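As with MTTR, this reduces to an average over per-ticket durations. The ticket figures below are hypothetical:

```python
# Hypothetical tickets: minutes from a support request being opened
# to its first acknowledgement by the support team.
first_response_minutes = [12, 45, 8, 30, 5]

# Average response time; shorter generally means a better
# customer experience.
avg_response_time = sum(first_response_minutes) / len(first_response_minutes)

print(avg_response_time)  # 20.0
```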
Despite its many benefits to modern technology, data-driven design still creates a few hitches in the developmental process. From data security and network porosity to storage problems, companies using data to drive enhanced user experience have to be comfortable with some of the harsh realities.
IoT devices and autonomous vehicles can generate up to 25 GB of data per hour, and even with 5G networks or Wi-Fi, streaming this amount of data is still too expensive. For instance, autonomous vehicles don't just sense perception, navigation, localization, and environmental condition data. They also sense internal conditions of the vehicle, such as the user's heating/cooling preferences, seat adjustment ranges, and many other streams of data.
Local servers cannot efficiently handle and process this amount of data in real time. Estimates indicate that less than 10% of data is uploaded to the cloud, making it difficult to get truly differentiated insights into a product in operation.
Data retrieved from sensors has to be analyzed and automatically processed before it can find any use in product design and development. However, in some industries, analyzing the endless stream of data coming through millions of sensors at the same time is a processing nightmare. Done poorly, processing, analyzing, and storing the relentless streams of data that would ultimately improve development can become a major cost burden for companies.
Like any other approach to product development, data-driven design must come together with collaboration, interaction, and cooperation. Many companies and teams struggle to keep data, models, and engineering designs synchronized. This can cause merge conflicts, redundant work, and, worse yet, slipped schedules.
When design and data are put together, there are a few more complexities to deal with. Additional challenges that engineering teams face with adopting data-driven design are listed below:
It's one thing to generate useful data and another to utilize it effectively in driving design and innovation within your company. The following are a few practical tips to ease the transition to data-driven design:
From cloud computing problems and data analysis difficulties to cost efficiency and AI integration, Collimator is the only engineering tool in the world that can solve nearly all the problems of data-driven development. Collimator is the world's first full-suite, Python-coded, AI-embedded platform with all the features needed to make data-driven design an enjoyable endeavor for your teams.
With Collimator, you can stream terabytes of data into your digital twin and analyze it using high-performance computing entirely in the cloud, regardless of your processor's computing power. Collimator also allows you to collaborate efficiently across different parts of the organization so everyone is looking at, and operating from, one source of truth.
During the core design process, you can accelerate development with re-usable function blocks, including full electric vehicle, engine, and dynamics models that you can use out of the box.
Collimator is all you need to super-charge your next design sprint.
Book a live demo with our team to get started!