
Machine Learning: Training and Implications

By Doug Rollins - 2018-10-19

When thinking about Machine Learning (ML), do you imagine giant, global networks of massive, teraflop supercomputers all working cooperatively? Or renegade code that listens to conversations through phones? Cool stuff, but science fiction (I love science fiction!).

When I turned off movie streaming and sat down to write a bit about ML, it got me thinking. Thinking about the ML systems we interact with daily, often without even realizing it, and about the challenges and benefits of those interactions. ML is often mentioned in an Artificial Intelligence (AI) context. While ML and AI are related, they are not exactly the same thing: machine learning is actually a subset of AI. It requires vast amounts of data delivered quickly to develop high-performance algorithms.

Interacting with Machine Learning

One obvious example popped up pretty quickly: online shopping. I don’t know about you, but I tend to frequent a few online stores. I realized I’d been interacting with ML systems and that I’d seen a real benefit.

Some time ago I noticed that some of these stores were suggesting additional items. These were not the huge, multi-national online retailers, just the opposite – they are smaller, specialty shops. Their suggestions were so good that I’d think, “Oh yes, I did need that.” Then I added those suggested items to my cart before checkout.

But what about the not-so-obvious Machine Learning examples?

The article “10 Real-World Examples of Machine Learning and AI” from www.redpixie.com listed several examples beyond recommendation engines (behavior-based shopping-suggestion systems):

  • Voice-based interaction systems that respond to an activation phrase
  • Social networks that visually identify which members are in relationships
  • Real-time navigation with location and landmark identification (from photographs)

The article goes on to list several other examples with which we all (probably) interact.

How did Machine Learning get us here? Training.

Training – the act of teaching an ML system what the data does and does not represent – is a labor- and time-intensive exercise that can vastly improve algorithm testing and tuning. Here’s a very simple example of how one might train an image classification system designed to identify images of people.

Suppose we have a set of images, some of which could be pictures of anything – a shopping cart, an electrical tower – and some of which are groups of people or single individuals. Suppose we’ve already identified the images of people so we can tell when the algorithm is right (this is a training exercise after all).

We’d ingest all the pictures, and our model would make a judgment about which images were of people and which were not. Since we know which images are actually of people and which images our algorithm identified as people, we can compare the two to evaluate, tune and improve our algorithm.
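A minimal sketch of that comparison step, with hypothetical image names and labels (this is just the bookkeeping around a model, not a real image model):

```python
# We already know which images contain people (ground truth), and the
# model has made its own judgment, so a simple comparison tells us how
# well the model did. All names and labels here are made up.

ground_truth = {          # what we already know about each image
    "img_001": True,      # True = contains a person
    "img_002": False,     # e.g. a shopping cart
    "img_003": True,
    "img_004": False,     # e.g. an electrical tower
}

model_predictions = {     # what our (hypothetical) model decided
    "img_001": True,
    "img_002": True,      # a mistake: the cart was labeled as a person
    "img_003": True,
    "img_004": False,
}

correct = sum(
    model_predictions[name] == label for name, label in ground_truth.items()
)
accuracy = correct / len(ground_truth)
print(f"Accuracy: {accuracy:.0%}")   # 3 of 4 correct -> 75%

# The mislabeled images are exactly the ones we would feed back into
# training to tune and improve the algorithm.
mistakes = [n for n, l in ground_truth.items() if model_predictions[n] != l]
print("Images to revisit:", mistakes)
```

At real scale the dictionaries become labeled data sets with millions of entries, but the evaluate-and-compare loop stays the same.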


If we have a large data set (with images of people already identified), we could train our model to better recognize people and adjust it when it gets an image wrong.

It would learn what a “person” looked like.
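The “adjust it when it gets an image wrong” idea can be sketched with a toy perceptron-style update. The two-number “features” below are made up and simply stand in for whatever a real image model would extract:

```python
# Toy illustration (not a real image model): labels are 1 = person,
# 0 = not a person, and the feature values are hypothetical.

samples = [
    ([0.9, 0.8], 1),   # features from a (hypothetical) person photo
    ([0.1, 0.2], 0),   # e.g. a shopping cart
    ([0.8, 0.7], 1),
    ([0.2, 0.1], 0),   # e.g. an electrical tower
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1                                  # learning rate

for _ in range(20):                       # several passes over the data
    for features, label in samples:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        predicted = 1 if score > 0 else 0
        error = label - predicted         # nonzero only when we got it wrong
        if error:                         # the "adjust when wrong" step
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error

# After training, the model has "learned" what a person looks like
# in this tiny feature space.
for features, label in samples:
    score = sum(w * x for w, x in zip(weights, features)) + bias
    print(features, "->", "person" if score > 0 else "not a person")
```

Production systems use deep neural networks rather than a single perceptron, but the feedback loop – predict, compare to the known label, adjust – is the same.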

Benefits of Successful Training

When we have more complex models – models that can identify not just a “person” but a specific person, or spot not just a trend but a dangerous one – what might we be able to do better, differently or more? This gets interesting. For example:

Image recognition:

  • OK, starting with a trivial example: when we have a model that can identify a photo of my dog, I can find other people with Golden Retrievers, or maybe a local rescue group where I could volunteer, or a local dog enthusiast group. Pretty cool, right?

Risk assessment:

  • We might abstract image recognition a bit to more general risk recognition and assessment. When we have a well-trained, accurate model to identify a risk like a developing storm and (again based on ML) plot a probability path, we may be able to act sooner.


  • Suppose a well-trained model could analyze massive numbers of images of people and then identify members of high-, medium- and low-risk groups for a specific illness. Or suppose we could identify and understand communicable disease spreading patterns by geography, travel patterns, the age of the affected, etc. to better understand conditions and patterns and intercept epidemics before they happen. Perhaps even pair expert caregivers with those most at risk.


  • The benefits extend into finance, where a current trend’s resemblance to historical trends can be measured and, if the similarity is strong, the risk of a similar outcome assessed.

These are just a few examples, but more are being developed regularly.

Flash for a Rapid Training Cycle

The benefits of successful training are quite clear, but what might not be so clear is that successful ML training and model evaluation rely on rapidly ingesting immense data sets.

Taking a hypothetical example, suppose we are evaluating seven (or ten or a dozen) models for image recognition in an automated collision avoidance system, and suppose we have a particular interest in properly distinguishing people from mechanized forms of traffic.

If we are under pressure to find the best model quickly (for an upcoming release) and due to our system’s data ingest rate we only have time to evaluate, say, three models – there is a real chance that the best model may not even be tested.


On the other hand, faster data ingest could mean that we can evaluate all the models fairly and equally. When flash can reduce the data set ingest time, we can complete our model analysis and find the optimal algorithm.
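A back-of-envelope sketch of that ingest-rate argument; all of the numbers below are hypothetical round figures, not measured drive specifications:

```python
# How many models can we evaluate before the deadline, as a function of
# how fast the data set can be ingested? All figures are hypothetical.

dataset_tb = 10                # evaluation data set size (TB)
time_budget_hours = 24         # deadline for the whole evaluation run
train_hours_per_model = 1      # compute time per model once data is loaded

def models_evaluable(ingest_gb_per_s: float) -> int:
    """Models that fit in the time budget at a given ingest rate,
    assuming each model run re-ingests the full data set."""
    ingest_hours = (dataset_tb * 1000) / ingest_gb_per_s / 3600
    per_model_hours = ingest_hours + train_hours_per_model
    return int(time_budget_hours // per_model_hours)

print(models_evaluable(0.5))   # slower media at ~0.5 GB/s -> 3
print(models_evaluable(3.0))   # flash at ~3 GB/s         -> 12
```

At the slower rate only three of our candidate models can be tested before the deadline – the scenario above, where the best model might never run – while the faster ingest rate leaves time to evaluate the whole field.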


What Might That Mean?

When we shorten algorithm evaluation cycle times via faster data ingest with flash, that might mean that we find the best identification algorithm and meet our deadline! It may also mean that we can tune our models through faster iterative testing (again, benefiting from shortened run times).

Although this is a hypothetical example, the benefits of faster ingest (and faster model evaluation) are real. When we hear that a technology can help us get more done, many of us look at that statement with some doubt. But when we’re talking about rapidly growing fields like Machine Learning and the tangible benefits, we need to take a second (and third) look.

How do you interact with ML during your work life? Home life? Reach out via Twitter @GreyHairStorage.

To stay current on all things Micron Storage, connect with us on Twitter @MicronStorage and on LinkedIn.

Doug Rollins

Doug Rollins is a principal technical marketing engineer for Micron's Storage Business Unit, with a focus on enterprise solid-state drives. He’s an inventor, author, public speaker and photographer. Follow Doug on Twitter: @GreyHairStorage.