Eitan Netzer

Mistakes of AI experts

Updated: May 18

This short article covers 7 common mistakes I have seen machine learning engineers make, from juniors to seniors.

I collected these mistakes from conversations with colleagues, from questions during my lectures, and from discussions on social media.

  1. Overselling: people imagine the Terminator…

This mistake is typically made in the sales process: presenting AI as an all-powerful being that only the seller can control.

This has misled many to confuse AI with magic. In some cases, the data science team is asked to achieve an unrealistic goal, dooming the project to failure from the get-go. To increase the chance of success, break the goal into small business questions, each backed by valid, quality data. Consider involving your analysts more in the process; they know both the business and the data inside and out. Too often you believe you own relevant data when, in fact, the data is unusable. Let your analysts lead the way: knowing your data is much more important than knowing the latest SOTA algorithm.


2. AI is a “new” field, which means that most experts grew up in academic research.


This usually leads to two negative phenomena.

  1. The sandbox. In academic research, the data lives in its own little bubble; the researcher usually tests his work on the same data. In most cases, that data is a simplification of real life, and what worked in the lab (or sandbox) will not work in the production process or system.

  2. Data is alive. In the lab, the data is static and no new data accumulates, but real-world data is a changing process. For example, when modeling user behavior, assuming a user will never change his ways is wrong. The data changes all the time, and so should the model, something academic research can usually disregard.
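One simple way to catch "living" data in production is to compare a feature's recent distribution against the baseline it was trained on. The sketch below is a minimal, hypothetical drift check (the numbers and the 3-sigma threshold are illustrative, not from the article):

```python
# Minimal data-drift check sketch: compare a feature's recent values
# against the training-time baseline. A large shift of the recent mean,
# measured in baseline standard deviations, hints the model may be stale.
import statistics

def drift_score(baseline, recent):
    """Absolute shift of the recent mean, in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

# Illustrative numbers: a feature that was stable at ~10 during training
# but has crept up to ~12 in production.
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]
recent = [12.0, 12.4, 11.8, 12.1, 12.3, 11.9]

if drift_score(baseline, recent) > 3.0:  # threshold is an assumption
    print("drift detected: consider retraining the model")
```

In practice you would run a check like this on a schedule and treat a sustained drift signal as a trigger to re-collect data and retrain.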

3. Machine learning developers prefer SOTA over fundamentals…


I remember giving an advanced tutorial at a professional conference where some “experts” in the crowd were displeased that I started with basic terminology. They kept asking about advanced stuff and architectures, showing a lot of knowledge of the latest published papers (they tracked them better than I did). But when I got to cross-validation (k-fold), a basic concept in machine learning and statistics, suddenly all the “experts” had tons of questions to ask…


They knew the latest paper by Facebook research, but not cross-validation? This drive to learn the advanced stuff while skipping the basics is a dangerous trend.

The fundamental problem of machine learning is generalization: how your model reacts to unseen data. Cross-validation is one of the most effective tools to test generalization; how can you apply machine learning without it?
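For readers who skipped that basic: k-fold cross-validation splits the data into k folds, trains on k−1 of them, and scores on the held-out fold, rotating until every sample has been tested once. A minimal pure-Python sketch (the "model" here is a trivial mean predictor standing in for any learner):

```python
# Minimal k-fold cross-validation sketch in pure Python.
# The "model" is a trivial mean predictor, a stand-in for any learner;
# the point is the fold rotation, not the model.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

def cross_validate(ys, k=5):
    """Return the per-fold mean squared error of a mean predictor."""
    errors = []
    for train, test in k_fold_indices(len(ys), k):
        prediction = sum(ys[i] for i in train) / len(train)   # "fit"
        mse = sum((ys[i] - prediction) ** 2 for i in test) / len(test)
        errors.append(mse)
    return errors

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(cross_validate(ys, k=5))  # one error estimate per fold
```

The spread of the per-fold errors is exactly the generalization signal the paragraph above is about: a model that only looks good on the data it was fit to will show it here.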


4. Hipster hyped hyperparameters… why not more data?


After the pipeline is ready and you get some results, I see people struggle with hyperparameter tuning (hopefully automated). Better algorithm parameters can improve your results, but most of the time the change will be modest (otherwise, I feel it would be hard to trust your model).

Instead, take what you have so far, try it in the real world, and collect and clean more data to improve your model, especially where it failed.
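For completeness, this is roughly what the (hopefully automated) tuning loop looks like. The sketch below is a generic random search; the objective function and parameter names (`learning_rate`, `depth`) are hypothetical stand-ins for a real cross-validated score:

```python
# Minimal random-search sketch for hyperparameter tuning.
# validation_score is a hypothetical stand-in for training a model
# and scoring it on held-out data; the parameter names are illustrative.
import random

random.seed(0)

def validation_score(learning_rate, depth):
    """Toy objective: peaks near learning_rate=0.1, depth=4."""
    return -(learning_rate - 0.1) ** 2 - 0.01 * (depth - 4) ** 2

best = None
for _ in range(50):
    params = {
        "learning_rate": random.uniform(0.001, 1.0),
        "depth": random.randint(1, 10),
    }
    score = validation_score(**params)
    if best is None or score > best[0]:
        best = (score, params)

print(best)  # best (score, params) found across 50 random trials
```

The point of the section stands, though: a loop like this buys you a modest bump at best, while more and cleaner data moves the ceiling itself.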


5. Skipping EDA and jumping straight to modeling.


Running AI algorithms on the data, where you apply sophisticated models, is probably considered the most fun part of the project. But rarely is this part the most important one.

I argue that the data exploration part is the most important step of the project: checking whether the data shows any statistically meaningful support for your business theory. Cleaning and mining your data might be draining, but it is the only way to make sure you have quality data before committing to a task your data holds no information to solve.


6. Committing to tasks instead of asking the business question.


Most AI experts are engineers, not business-oriented. I admit I started like that as well: I was given hard problems to solve, and I was eager to solve them. I did not stop to think, and then what?

In one case, I was given an interesting problem: predict when a driver will crash his car, a few minutes before he does. I was given data from his car's IoT sensors, and I got to a model that did a fairly good job. But then we hit a wall…

How do you use that model without being “big brother” to the company's customers and driving them to the competition?

We had solved a tough data problem, but we had nothing of value to do with it.

Later we found other uses for a similar model, but it was a big lesson: the business should lead the research!


7. Computer Expert, not Domain Expert


Every business has its unique twist and data. Finding a data scientist with that specific domain knowledge can be hard and expensive, and training one can take a long time. Most machine learning developers come from a computer science background, and adapting to a specific domain can take months to years. One should consider how to empower the analysts already in the system with predictive modeling capabilities.

