When you're working at the algorithm level, you get funny looks... Even if it reaches state-of-the-art results, who cares, because you can throw more electricity and data at the problem instead.
I worked specifically on low-data algorithms, so my work was particularly frowned upon by modern AI scientists.
I'm not doxxing myself, but unpublished work of mine got published in parallel as Prototypical Networks in 2017. And everyone laughed (<- exaggeration) at me for researching RBFs (radial basis functions), which were considered defunct. (I still think they're an untapped optimization.)
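
If the RBF / Prototypical Networks connection sounds odd: a prototypical classifier is basically a one-layer RBF network whose centers are the class means. A toy sketch (my own illustration, not the published code; the names and data here are made up):

    import numpy as np

    def prototypes(embeddings, labels, n_classes):
        # Class prototype = mean of that class's embeddings (the "center").
        return np.stack([embeddings[labels == c].mean(axis=0)
                         for c in range(n_classes)])

    def rbf_logits(queries, centers, gamma=1.0):
        # Classic RBF unit: response decays with squared distance to a
        # center. Prototypical Networks use -||x - c||^2 directly as the
        # logit (gamma = 1, Euclidean distance), then softmax over classes.
        d2 = ((queries[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return -gamma * d2

    # Toy usage: 3 classes, 5 support points each, 4-dim "embeddings".
    rng = np.random.default_rng(0)
    support_x = rng.normal(size=(15, 4)) + np.repeat(np.eye(3, 4) * 3, 5, axis=0)
    support_y = np.repeat(np.arange(3), 5)
    centers = prototypes(support_x, support_y, n_classes=3)
    queries = rng.normal(size=(2, 4))
    print(rbf_logits(queries, centers).argmax(axis=1))  # predicted classes

Squint and the "few-shot classifier" is an RBF layer with a learned embedding feeding it.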

... 1957
Perceptrons. The math dates back to the 40s, but '57 marks the first artificial neural network.
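
For anyone who hasn't seen it, the whole 1957 algorithm fits in a few lines. A rough sketch of Rosenblatt's learning rule in modern NumPy (obviously not the historical implementation):

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=1.0):
        # Rosenblatt's rule: on a mistake, nudge the weights toward
        # (or away from) the misclassified example. Labels in {-1, +1}.
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:  # misclassified or on the boundary
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    # Toy usage: learn AND, which is linearly separable.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]
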
Also, 35 years is infancy in science, or adolescence at most, as we see from deep learning's growing pains right now. Visualizations of neural network responses, and reverse-engineering networks to understand how they tick, predate 2010 at least. Deep Dream was actually built on the idea of network inversion visualizations, and that's ten years old now.
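
The core trick behind those inversion / Deep Dream-style visualizations is gradient ascent on the input instead of the weights. A minimal sketch (PyTorch; the tiny untrained net here is just a stand-in for whatever trained model you'd actually visualize):

    import torch
    import torch.nn as nn

    # Stand-in network; in practice you'd load a trained classifier.
    # The technique is identical either way.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    model.eval()

    def maximize_activation(model, unit=0, steps=100, lr=0.1):
        # Gradient ascent on the INPUT, not the weights: start from noise
        # and step the image toward whatever makes the chosen unit fire.
        img = torch.randn(1, 3, 64, 64, requires_grad=True)
        opt = torch.optim.Adam([img], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            (-model(img)[0, unit]).backward()  # negate to ascend
            opt.step()
        return img.detach()

    vis = maximize_activation(model, unit=3)  # an image, not a prediction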