Would something like the following be feasible even though the accelerometer isn't listed as one of the native capabilities?
```
Build an app that uses the accelerometer to measure the g-forces experienced while driving, and uses an audio tone with varying volume to indicate the magnitude of g-forces.
Show a large circle on the screen with a single black dot to represent the current g-forces for X and Y. Show a vertical rectangle on the side with a single blue dot to represent the current g-forces in the Z direction.
Using a single audio tone, vary the volume of this tone to represent the sum of X, Y, and Z g-force readings. At standstill the audio should be silent.
```
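For what it's worth, the core of that prompt is buildable with stock iOS frameworks. Here's a rough Swift sketch, assuming `CMDeviceMotion.userAcceleration` (gravity already removed, so a parked car reads ~0 g and the tone is silent at standstill) and an `AVAudioSourceNode` sine generator; the class name and the simple abs-sum/clamp mapping are my own choices, not anything from the original prompt:

```swift
import CoreMotion
import AVFoundation

// Sketch only: g-force readings drive the volume of a continuous tone.
final class GForceMonitor {
    private let motion = CMMotionManager()
    private let engine = AVAudioEngine()
    private var phase: Double = 0
    private var volume: Float = 0   // updated from the motion callback

    func start() {
        let sampleRate = 44_100.0
        let toneHz = 440.0
        // A steady sine tone whose loudness tracks the g-force sum.
        let source = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList -> OSStatus in
            guard let self else { return noErr }
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                let sample = Float(sin(self.phase)) * self.volume
                self.phase += 2 * .pi * toneHz / sampleRate
                for buffer in buffers {
                    buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: nil)
        try? engine.start()

        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let a = data?.userAcceleration else { return }
            // Sum of per-axis magnitudes, as the prompt asks; clamped to [0, 1].
            let g = abs(a.x) + abs(a.y) + abs(a.z)
            self?.volume = Float(min(g, 1.0))
            // a.x / a.y would position the black dot in the circle; a.z the blue dot.
        }
    }
}
```

The UI dots would just bind those same `a.x`/`a.y`/`a.z` values to positions in a SwiftUI view; the tricky parts in practice are audio-session configuration and motion permissions, not the math.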
I have wanted something like this for a long time: while driving, it would give me a better sense of what any cargo or passengers are experiencing.
It seems to me that in the transformer-model world we now live in, utilities like these are excellent at generating things that appear in many tutorials/posts/etc. However, there's little-to-no evidence that they can generate anything truly custom — for instance, any application requiring a significant level of domain-specific knowledge (your last point) — and given the models' architectures there's no reason to believe that will change. What is it that makes you so confident that will become a possibility?
I recently used ChatGPT and Claude to help me build a simple app for my wife — basically just listing items and being able to edit them. It was VERY frustrating, as both models get pretty basic things wrong around plumbing like configuring SwiftData or CoreData.
I tried this tool with a version of the prompt(s) I used in that project, and I got an app that's just a white page.