Ramkumar Ramachandra<p>I'm trying out <a href="https://mathstodon.xyz/tags/GenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenAI</span></a> in the context of writing code. My conclusion, based on a few days of intensive use, is that it is alpha-quality, with a suggestion rejection rate of 95% on any real software project like <a href="https://mathstodon.xyz/tags/LLVM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLVM</span></a>. The 5% of good suggestions appear when you're editing one instance of a pattern and want to change all instances; things like paren-matching are also handled automatically. This little automation comes at the cost of putting up with bad visual feedback nearly all the time, and it can take a while to get used to. It is by no means "smart", but the technology offers a way to automate things that could never be automated by classical software.</p><p>I also tried it on a toy project: a tree-sitter-based LLVM IR parser. Here, the entire task is a mechanical chore: reading docs and ample examples, and encoding that knowledge in the parser. For kicks, I tried to generate the entire parser with the technology, and the result turned out to be so bad that I had to delete it. Then I started writing the parser myself, and the suggestions were actually quite good. The best part? I generated 300 tests to exercise the parser automatically (I had to tweak very little)! Of course, the tests aren't high-quality, with over 30% redundancy, but this is a toy project anyway, so who cares?</p>
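<p>For anyone curious what "encoding the knowledge in the parser" looks like, here's a minimal sketch of a tree-sitter <code>grammar.js</code> covering a tiny slice of LLVM IR. The rule names and the subset of IR covered are my own illustration, not the actual project's grammar; the DSL functions (<code>grammar</code>, <code>seq</code>, <code>choice</code>, <code>repeat</code>, <code>optional</code>) are tree-sitter's standard grammar-authoring API.</p>

```javascript
// Illustrative sketch only: a tree-sitter grammar for a tiny LLVM IR subset,
// enough to parse something like:
//   define i32 @add(i32 %a, i32 %b) { %c = add i32 %a, %b  ret i32 %c }
// Rule names here are hypothetical, not the real project's.
module.exports = grammar({
  name: 'llvm_ir_sketch',

  // Whitespace and ;-comments may appear between any tokens.
  extras: $ => [/\s/, $.comment],

  rules: {
    // A module is a sequence of function definitions.
    module: $ => repeat($.function_definition),

    function_definition: $ => seq(
      'define', $.type, $.global_identifier,
      '(', optional($.parameter_list), ')',
      '{', repeat($.instruction), '}',
    ),

    parameter_list: $ => seq($.parameter, repeat(seq(',', $.parameter))),
    parameter: $ => seq($.type, $.local_identifier),

    // Just two instructions, to keep the sketch short.
    instruction: $ => choice($.ret_instruction, $.add_instruction),
    ret_instruction: $ => seq('ret', choice('void', seq($.type, $.value))),
    add_instruction: $ => seq(
      $.local_identifier, '=', 'add', $.type, $.value, ',', $.value,
    ),

    value: $ => choice($.local_identifier, $.integer),

    // Lexical rules: integer types, @globals, %locals, integer literals.
    type: $ => /i[0-9]+/,
    global_identifier: $ => /@[A-Za-z._$][A-Za-z0-9._$]*/,
    local_identifier: $ => /%[A-Za-z._$][A-Za-z0-9._$]*/,
    integer: $ => /-?[0-9]+/,
    comment: $ => /;.*/,
  },
});
```

<p>The chore the post describes is extending rules like these, one IR construct at a time, from the LangRef and example .ll files, which is exactly the kind of pattern-extension work where the suggestions were good.</p>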