Roughly 80 years ago many computers were analog: an amplifier with variable resistors, for example, could multiply and divide.
Matrix multiplication is central to the Transformer architecture.
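The analog-multiply idea extends naturally to matrix–vector products: in a resistor crossbar, Ohm's law does the multiplies and Kirchhoff's current law does the sums. A rough sketch of that physics in NumPy (the conductance and voltage values are made up for illustration):

```python
import numpy as np

# A resistor crossbar computes y = G @ v "in one shot":
# Ohm's law gives each junction current I_ij = G_ij * v_j (multiply),
# Kirchhoff's current law sums the currents along each output row (add).
G = np.array([[1.0, 0.5],      # conductances in siemens (hypothetical values)
              [0.2, 2.0]])
v = np.array([3.0, 4.0])       # input voltages

# What the physics would do continuously, simulated digitally:
row_currents = G @ v
print(row_currents)            # [5.0, 8.6]
```

The appeal is that the crossbar performs all N*M multiply-accumulates in parallel, in the time it takes currents to settle, rather than in O(N*M) digital operations.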
I wonder if it would make sense to build a sort of Transformer on a vaguely similar principle?
If you're wondering why I'm asking: people still design digital computers with vacuum tubes or even mechanical relays, so why not analog Transformers?
https://hackaday.com/2023/12/05/a-single-board-computer-with-vacuum-tubes/
https://hackaday.io/project/189725-homebrew-16-bit-relay-computer
There's a lot of research going on in this space though, because yeah, nature can solve certain mathematical problems more efficiently than digital systems.
There's a decent review article that came out recently: https://www.nature.com/articles/s41586-025-09384-2 or https://arxiv.org/html/2406.03372v1
The problem then becomes training. The algorithm of choice is backpropagation, which requires computing derivatives across the whole network. Training an analog system directly would mean tweaking each weight in turn and re-running a batch of inputs to estimate the slope. That's impractical for large networks, since training typically takes on the order of a billion update rounds.
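To make the cost concrete, here's a toy sketch of that tweak-and-measure (finite-difference) approach on a single linear layer, compared against the analytic gradient backprop would give. Names and values are illustrative, not from any real analog system:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Toy "network": one linear layer with mean squared error.
    return float(np.mean((x @ w - y) ** 2))

def fd_grad(w, x, y, eps=1e-6):
    # Finite-difference gradient: nudge each weight and re-run the batch.
    # For N weights this costs N+1 forward passes *per update step*,
    # where backprop costs roughly one forward plus one backward pass.
    base = loss(w, x, y)
    g = np.zeros_like(w)
    for i in range(w.size):
        w_plus = w.copy()
        w_plus[i] += eps
        g[i] = (loss(w_plus, x, y) - base) / eps
    return g

def analytic_grad(w, x, y):
    # What backprop computes exactly for this toy model.
    return 2.0 / len(y) * x.T @ (x @ w - y)

x = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = rng.normal(size=3)

# The two agree, but the finite-difference version needed 4 forward
# passes for 3 weights; a billion-parameter model would need ~1e9.
print(np.allclose(fd_grad(w, x, y), analytic_grad(w, x, y), atol=1e-3))
```

That N-forward-passes-per-step scaling is why much of the analog-training literature looks at alternatives like random-perturbation methods or doing the backward pass digitally while the forward pass stays analog.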