Has anyone seen this implemented? Doesn't look like there is any code to try or links to a demo.
They mention a video, but the paper doesn't link it. A quick Google found this (which I vaguely recognise now as having been posted on HN before):
- https://jenson.org/text/
- https://youtu.be/_9YPm0EghvU?si=NU3SDrKxAK7_5FQW
I'm the author of the paper. The prototype was never released by Google, unfortunately. The implementation we used was a bit hacky: a visual effects layer drawn over the text editing layer, and it was tricky to keep the two in sync. A more robust implementation would try to avoid that. I'm happy to discuss specifics if you're interested.
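To make the sync problem concrete: here's a minimal sketch (my illustration, not the actual prototype) of what "keeping two layers in sync" means. The overlay has to mirror the text layer's scroll offsets and re-derive its highlight geometry from the current selection; any state it holds onto between events can drift out of date, which is what makes the approach fragile. The state shapes and the `syncOverlay` helper below are invented for illustration.

```javascript
// Pure model of one sync step, so the idea can be shown outside a browser.
// In a real page this would run on every 'scroll'/'input'/'selectionchange'
// event fired by the underlying text editing layer.
function syncOverlay(textState, overlayState) {
  return {
    ...overlayState,
    // Copy scroll offsets so decorations stay glued to the glyphs they cover.
    scrollTop: textState.scrollTop,
    scrollLeft: textState.scrollLeft,
    // Re-derive highlight ranges from the live selection; keeping stale
    // ranges around is exactly the kind of drift that causes glitches.
    highlights: textState.selections.map(sel => ({
      start: sel.start,
      end: sel.end,
    })),
  };
}

const text = {
  scrollTop: 120,
  scrollLeft: 0,
  selections: [{ start: 4, end: 9 }],
};
const overlay = syncOverlay(text, { scrollTop: 0, scrollLeft: 0, highlights: [] });
console.log(overlay.scrollTop); // 120 — overlay now matches the text layer
```

Every event the overlay misses (an IME composition, an autoscroll, a programmatic edit) leaves it showing effects in the wrong place, which is why a single-layer design is attractive.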
First off, thank you for designing this. Both iOS and Android have focused on streamlining their user experience in the past few years, but unfortunately text editing seems just as annoying as before.
In theory how would a "robust implementation" be designed to avoid two layers?