This has some very interesting privacy and security risks.
If the tech can do more complex frequency analysis, then couldn't it essentially be used as a microphone by an app that doesn't need microphone permission?
I don't think that's realistic. If you're looking at the acceleration sound waves cause against a phone's accelerometer, that's likely far below the sensitivity of the sensor: phones are too massive relative to the force of sound waves from speaking. F = ma, so the acceleration you're looking at is the force of the sound wave (tiny) divided by the phone's mass (relatively large). The only reason this kind of works is that you're putting the phone on an object that's mechanically vibrating. I suppose it would work in certain situations, like putting the phone on top of a large speaker, but you'd never get the resolution to decipher audio from sound waves alone for a phone sitting on a desk or in a pocket.
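The F = ma argument above can be made concrete with a back-of-envelope estimate. All the numbers here (speech level, phone face area, phone mass, sensor noise floor) are rough assumptions for illustration, not measurements:

```python
# Rough sketch of the F = m*a argument: how much acceleration does
# speech-level sound pressure impart to a phone? Numbers are assumed.

SPL_DB = 60.0        # conversational speech at ~1 m, dB SPL (assumed)
P_REF = 20e-6        # reference sound pressure, Pa
phone_area = 0.01    # exposed face of a phone, ~7 cm x 15 cm (assumed), m^2
phone_mass = 0.2     # typical handset mass, kg (assumed)

pressure = P_REF * 10 ** (SPL_DB / 20)   # ~0.02 Pa
force = pressure * phone_area            # N
accel = force / phone_mass               # m/s^2
accel_g = accel / 9.81                   # in units of g

print(f"pressure: {pressure:.4f} Pa")
print(f"acceleration: {accel:.2e} m/s^2 ({accel_g:.2e} g)")
```

This lands around 1e-4 g, which is at or below the noise floor commonly quoted for consumer MEMS accelerometers, supporting the point that airborne speech alone is a poor signal compared to direct mechanical coupling.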
It's a pretty well-known exploit that the CIA is capable of turning a lot of electronics with speakers into microphones. I imagine there is an entire classified backlog of things they can turn into microphones without the target's knowledge.
The CIA… plug a set of regular headphones into a microphone jack, open a recording application, and speak into the headphone speaker. You don't need a three-letter agency for that open physics secret.
Wouldn't you need to rewire the headphones? Headphones use a 3-pin TRS plug, whereas a 4-pin TRRS plug is used when you add a microphone. Regardless of whether the 4-pin is CTIA or OMTP, the mic contact is generally only going to get shorted to ground if a 3-pin TRS plug is plugged into a 4-pin TRRS socket, or if a 4-pin TRRS plug is plugged into a 3-pin TRS socket.
I am crap with physics, but I was going to say that I think the last 50+ years of speaker development have been about making them less of a microphone than they inherently are.
Fun idea, and I also didn't know that websites could get access to my accelerometer data. However, for me the sample frequency is 50 Hz, which puts the Nyquist limit at 25 Hz: way too low to directly measure even the lowest string pitch (E2, about 82 Hz).
If you know you have a single frequency close to an actual frequency of interest, you can use the fact you know you're in an aliased band to get a precise frequency estimate.
I guess that's sort of like a weird PLL thing? But I'd imagine you'd have to have prior knowledge of which string you're tuning, otherwise the analysis is going to alias against every harmonic.
Presumably there is an anti-aliasing low-pass filter somewhere before JS gets the data. I have a similar sample rate and it certainly didn't work at all for me.
The neat bit is that it doesn't necessarily need to sample 82 Hz directly. If the sample rate is known and the target is one of a few guitar strings, the aliased peak can still be useful. The tricky part is probably rejecting the wrong alias once the vibration signal gets messy.
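The aliasing argument is easy to check numerically. A minimal sketch (the folding formula is standard; the guitar-string targets and the 50 Hz rate come from the thread):

```python
# Where does a tone land after sampling at fs with no anti-alias filter?
# Standard frequency-folding formula; guitar string pitches in Hz.

def alias_freq(f: float, fs: float) -> float:
    """Apparent frequency of a tone f after sampling at fs (folded into [0, fs/2])."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs = 50.0  # sample rate reported upthread
strings = {"E2": 82.41, "A2": 110.0, "D3": 146.83,
           "G3": 196.0, "B3": 246.94, "E4": 329.63}

for name, f in strings.items():
    print(f"{name} ({f} Hz) aliases to {alias_freq(f, fs):.2f} Hz")
```

Notably, D3 and B3 fold to nearby apparent frequencies (about 3.17 Hz and 3.06 Hz), which illustrates why rejecting the wrong alias is the hard part once you can't assume you know which string is sounding.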
The very clear and succinct description on the landing page makes me miss the bizarre, antisocial, charming quirkiness that people who made things like this used to be stuck with for their copy, rather than AI-generated language. Our cacophony of experience is quieting.
I mean yeah, that's cool as a fun project. I've also heard about a project that used accelerometers as microphones for surveillance. And while it's doable, even the cheapest, crappiest mic would do a much better job at recording sounds for whatever your goal is.
The code mentions an "autocorrelation" method: this is a method where you multiply the signal with a delayed version of itself: result = sum(x[i] * x[i - delay] for i in some range). You vary the "delay" and pick the value that maximizes the result. This is based on the idea that successive periods of the signal should be similar to each other.
It's not a very good method: it's prone to octave errors (reporting a pitch one octave lower than the correct one). Furthermore, the "delay" is an integer, which limits the precision, so you need to use some form of interpolation. It also doesn't let you recognize multiple notes sounding together. And it's slow.
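The method described above can be sketched in a few lines. This is a naive illustration of the idea, not the project's actual code; the sample rate, search range, and test tone are assumptions:

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=60.0, fmax=500.0):
    """Naive autocorrelation pitch estimate: pick the integer delay
    maximizing sum(x[i] * x[i - delay]) over a plausible lag range."""
    lags = range(int(fs / fmax), int(fs / fmin) + 1)
    scores = [np.dot(x[lag:], x[:-lag]) for lag in lags]
    best = lags[int(np.argmax(scores))]
    return fs / best  # integer lag -> limited frequency precision

fs = 8000
t = np.arange(int(0.1 * fs)) / fs
x = np.sin(2 * np.pi * 110.0 * t)  # pure A2 tone
print(autocorr_pitch(x, fs))       # near 110, quantized by the integer lag
```

The integer-lag quantization the comment mentions is visible here: at 8000 Hz the true period of A2 is about 72.7 samples, so the estimate snaps to lag 72 or 73 unless you interpolate around the peak.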
You can read the paper on the "YIN" pitch estimation algorithm, which describes the method in detail.
I think FFT-based methods are more reliable. I did a little experimentation, and when measuring a pure sine wave, the frequency can be determined with high precision (tenths to hundredths of a Hertz). It's not so good in the presence of noise or multiple instruments - I tried to use hill-descent optimization to figure out the pitch of each harmonic, but it didn't work out.
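One standard way to get sub-bin precision from an FFT, as described above, is parabolic interpolation around the magnitude peak. This is a generic sketch (not necessarily what the parent comment used; window choice and signal parameters are assumptions):

```python
import numpy as np

def fft_peak_freq(x, fs):
    """Estimate a pure tone's frequency: FFT magnitude peak,
    refined by parabolic interpolation on the log magnitude."""
    window = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * window))
    k = int(np.argmax(spec[1:])) + 1                 # skip the DC bin
    a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)          # fractional bin offset
    return (k + delta) * fs / len(x)

fs = 8000
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 220.3 * t)  # tone deliberately between FFT bins
print(fft_peak_freq(x, fs))
```

With a raw 4096-point FFT at 8 kHz the bin width is about 2 Hz, yet the interpolated estimate for a clean sine lands within hundredths of a Hertz of 220.3, matching the precision the comment reports. With noise or overlapping harmonics the peak picking is where it falls apart.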
Ah, that does look like something I can work with - thanks for the legwork. I will check it out and see if it's worthwhile converting to C/C++ for my device.
Pack it up, folks, nubinetwork has exposed the scam that is the guitar tuner industry. You don't need a guitar tuner if you have ears; all the guitar techs and musicians who use them have bought into a lie. And obviously, since guitar tuners are a waste of time, a tech demo showing that you can use the accelerometer in a commodity handheld device to pick up minute vibrations with sufficient accuracy to detect guitar tuning from a web page is just playing into the hands of Big Tuner.
Seriously, this is the very definition of a shallow dismissal.
If you have a good external reference point. But it's also pretty easy to have your tuning drift quite a bit away from E standard if you rely solely on the strings. Tuning the strings relative to each other is not the same as landing exactly on the standard tuning you want. This is especially true if you play in standard tunings below E, like C or B, where strings can be looser than the norm.
Pro guitar teacher here with over twenty years of experience teaching the guitar, and close to forty years of experience playing it. I still struggle with properly tuning my instrument by ear. Nothing wrong with my ears. It's just not easy to do this right.
For me it's hard because of tempered tuning, so each string should be slightly out of tune for everything to sound good. If I tuned by ear, I could get two open strings to match, but all the fretted notes and chords wouldn't sound good. On my ukulele I even tune one string down on purpose to make the fretted note sound better. And then there is the inharmonicity of overtones, and some strings have more noticeable overtones that influence how the pitch is perceived.
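The tempered-tuning point above is quantifiable: in 12-tone equal temperament, intervals deliberately deviate from the pure frequency ratios the ear locks onto when tuning by matching. A small arithmetic check (standard ratios, nothing specific to this thread):

```python
import math

# Equal temperament vs. just intonation: why intervals tuned "pure"
# by ear disagree with the tempered tuning the frets assume.
just_third = 5 / 4               # pure (just) major third ratio
tempered_third = 2 ** (4 / 12)   # four semitones in 12-TET

cents = 1200 * math.log2(tempered_third / just_third)
print(f"tempered major third is {cents:.1f} cents sharp of just")
```

The tempered major third comes out roughly 14 cents sharp of the pure ratio, which is plainly audible as beating. So tuning by eliminating beats between open strings genuinely does fight against making fretted chords sound right.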
When the right defender is near the center I'm reading ~24.74Hz, so slightly above G.
2011 https://www.researchgate.net/publication/221609349_spiPhone_...
Diagram: https://i.sstatic.net/8rSD2.jpg
What would that filter look like?
but I don't think it will work well for this case.
My 6-string Kiesel Kyber bass would like a word with you while sounding its 41 Hz low E.
And if you don't even have that, use a speaker or headphone as the microphone; you'll probably also get better results.