AI programming - skills.md anyone?
OK, AI discussion has happened here a few times already this year, and looking at the speed of AI programming growth and improvement, it will happen again, and again, and again... Here is my contribution ;-)
Question first: Apparently we can improve a specific programming area by providing "skills", which are additional instructions on top of what the AI already knows. At least for Anthropic's Claude, which I happen to be using (I would be happy to switch AIs if needed). There is a "skills-builder" I apparently could use, guided by the watchful eye of the AI. Before I waste my time building Igor skills myself, has anyone managed to create such skills and would be willing to share? I am looking at you, WaveMetrics ;-)
By the way, there is an Igor Pro Wizard in ChatGPT: "Expert in Igor Pro programming, debugging, and visualization" by Shuvam Sarkar. Since this lives in a chat interface, I have not found much use for it.
Now the observation/justification: After real success with Python "code generation", I carefully (on a branch) let Claude loose on my Igor code (Irena).
I asked it to take an ugly mess of code, three somewhat related subpackages each with its own GUI, and create one new package with a new GUI that can do the work of those three, and more. I was surprised that the code compiled and functioned on the first attempt and generally achieved the desired results. IMPRESSIVE...
Now, the GUI, that was a different problem: a complete mess, with controls placed outside the panel and overlapping each other. All the controls were there, but not where they would make any sense. I spent some time redesigning the GUI and testing at the same time. But with much less effort (just typing the lines would have taken forever) I got a new tool and can delete the old one.
Later I used AI (Claude) to find a bug in the new code (the AI was innocent; this was an old bug that just washed up with this new tool), and it was much easier because the AI readily navigated through the code to find the offending function in a completely different, decade-old .ipf file. After that it was trivial for me or the AI to fix it.
I also asked it to review my code and point out any bugs it finds, reformat the code into nicely aligned code, and add doc strings/comments. VERY USEFUL.
Anyway, it is obvious that, properly used, AI for coding can be a major timesaver. We need to utilize it, or others - or other languages - will overrun us. Igor has a much smaller footprint in AI training data, so Igor skills are less developed. For example, it occasionally dreams up new functions - great ideas, useful, just not available. But a properly designed skills package (e.g., "ONLY use these functions:" followed by a list of all functions Igor provides) could make Igor Pro programming with AI much more grounded and robust.
Obligatory: yes, AI generates bugs, I know (me too!). One needs to check/verify every operation and every branch of code. But we should do that with any code anyway...
I have found that Claude Opus 4.5/4.6 is pretty good at writing Igor code that compiles successfully and does what I want it to do. I've had it write several hundred lines of Igor code several times, and there are often only one or two compile errors.
A barrier to this kind of skill is that the AI using the skill needs a way to compile a file and get back the errors. As I understand it, our compiler stops at the first compile error (not just in a single file, but across all files being compiled within Igor's environment). There is an undocumented way to have Igor write that compile error to stdout instead of putting up the "compile error" dialog, but even so, just one error is reported. This feature is used by our CI and is intended to be used only on headless systems, which is why it's undocumented.
Once compile errors are fixed, I think it's conceivable that a skill could be written that starts Igor, includes a procedure file, and then runs a command that does something to test the code that was written. We do something similar in our CI already.
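As a rough illustration of the "run a command to test the code" step, the skill could ask Igor to execute a small smoke-test function after the procedure file compiles. The function name and the specific checks below are invented for illustration; only `Make`, `WaveStats`, and the `V_` output variables are standard Igor:

```
// Hypothetical smoke test a skill could invoke after compilation succeeds.
// Returns 0 on success, non-zero on failure, so a wrapper can report a result.
Function RunSmokeTest()
	// Exercise the freshly compiled code against a tiny synthetic dataset
	Make/O/N=100 testWave = sin(x/10)
	WaveStats/Q testWave
	if (V_npnts != 100 || numtype(V_avg) != 0)
		print "SMOKE TEST FAIL"
		return 1
	endif
	print "SMOKE TEST PASS"
	return 0
End
```

The wrapper would then only need to get the PASS/FAIL result (or the return value) back out of Igor, which is where the stdout-capture question below comes in.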
Our IgorInfoForGitHub.ipf runs as part of a GitHub action and writes some information to stdout and some to a file that GitHub creates before running the step and then reads after it completes. Capturing stdout on Windows is tricky, and I don't know if an AI running a step would be able to read stdout the same way this action can read it.
That's a long way of saying that Igor has most, but not all, of the fundamental capabilities I think would be necessary to do what you're trying to do. So far we don't have anything more to contribute to the cause.
April 27, 2026 at 08:57 am - Permalink
You might be able to use the skills.md file to direct the AI to read our documentation from the new documentation website (docs.wavemetrics.com/igorpro/commands, or other relevant URLs within docs.wavemetrics.com). I've played around with this a bit, not in a skills file but as an initial prompt. As long as it comes up with a valid URL to check, it does manage to digest parts of the documentation well enough to answer questions and provide our actual code examples. I would assume this means it would also produce more accurate "novel" code, but I haven't thoroughly tested that theory.
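For what it's worth, here is a minimal sketch of what such a skill might look like, using Claude's SKILL.md convention (a directory containing a SKILL.md with YAML frontmatter). The skill name and the instruction wording are assumptions for illustration, not a tested file:

```markdown
---
name: igor-pro-coding
description: Use when writing or reviewing Igor Pro procedure (.ipf) code.
---

# Igor Pro coding guidance

- Only call built-in operations and functions you can verify exist.
  Check https://docs.wavemetrics.com/igorpro/commands before using one.
- Igor procedure code uses `Function ... End` blocks, tab indentation,
  and `//` comments; it is not C, so do not invent C library calls.
- When unsure about a command's flags, fetch its page under
  docs.wavemetrics.com and quote the documented syntax.
```

The key idea is the same as your initial-prompt experiment: push the model toward looking up real documentation pages instead of dreaming up plausible-sounding functions.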
April 27, 2026 at 09:19 am - Permalink
I've used GitHub Copilot successfully via the web interface. See e.g. https://github.com/AllenInstitute/MIES/pull/2686. This was done without any compile feedback or skills file. The codebase is sufficiently large for it to come up with relatively okayish code.
April 28, 2026 at 09:14 am - Permalink
In reply to I've used github copilot… by thomas_braun
According to https://github.com/AllenInstitute/MIES/agents/pull/2686?session_id=b10fcbd6-09e7-4c61-a617-c02789769fb6 it looks like Copilot is calling your ./tools/run-ipt.sh script to do some linting, though in this particular case that didn't find any errors, so it wasn't strictly necessary.
Copilot was using Claude Opus 4.5 for that run. We've found that Claude Opus 4.5/4.6 is pretty good at writing Igor code. We haven't tested other models much.
April 28, 2026 at 09:25 am - Permalink
Well, a few hours and some tokens with Claude later, here is my first attempt at skills:
https://github.com/jilavsky/igorpro-skills
I tested it with my VS Code Claude extension on macOS, and it sees the files and can use them. Not sure how much it helps in real life, but it was more or less free. If anyone can improve on these files, please do.
Note that I trained it on some of my own panels, so at this time it will have a significant bias toward my dense panels. Better, more generic panel guidance would probably be more useful for a general audience.
April 28, 2026 at 10:41 am - Permalink