Historically we had expert systems, formal logic, hand-authored decision trees (e.g., old-school machine translation), domain-specific algorithms (e.g., A* pathfinding), and more.
Nowadays it seems like everything is some form of ML trained on real world data.
(I’m broadly lumping everything generated automatically from data into the “ML” bucket, including HMMs, Bayesian networks, stochastic modeling, etc.)
Maybe this is a good thing? The results definitely speak for themselves. I might buy that the scale of most modern problems simply overwhelms hand-authored approaches. Are we losing anything by putting all our eggs in the big-data-and-opaque-emergent-models basket, though? Are there any other forms of AI that we’re neglecting? Or am I just being old fashioned and naive?
First, a lot of those other things likely survive in unfashionable nooks of the world that the new ML hasn’t yet reached or where data volumes are structurally small.
Depends on your definition. People certainly use A* a lot, but maybe it’s too mundane to feel like “AI” anymore? There’s also plenty of non-ML in robotics: Kalman filters, SLAM, etc.
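For anyone who hasn’t seen it in a while, here’s roughly what that “mundane” non-ML AI looks like: a minimal A* on a toy 4-connected grid with a Manhattan-distance heuristic (a standard admissible choice for this setup). The grid, function name, and cell encoding are just illustrative, not from any particular library.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (open) / 1 (wall) cells.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid with unit step cost
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk parent pointers back to the start, then reverse
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry; a cheaper route was found already
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # routes around the wall in the middle row
```

No training data, no model, and the heuristic makes it both fast and provably optimal, which is exactly the kind of guarantee the opaque-emergent-model bucket rarely gives you.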
My previous job was at an AI company that didn’t do much ML; job scheduling and case-based reasoning were the bread and butter. I think they are doing more ML now.
There’s a great Alan Kay quote: “Technology is anything invented after you were born.” Maybe AI is similar.
Both AI and machine learning seem like fairly vague, generic concepts that can be applied to many things just as well as to a few, depending on how one defines them.
That said, pathfinding, for example, is used in many places: graph databases, 3D printing, etc.