Using GitHub Copilot to generate Igor Pro code
I have been experimenting with using GitHub Copilot from within VSCode to write Igor procedure code. This is pretty easy to do. I don't have a step-by-step tutorial, but generally speaking you need a GitHub.com account and VSCode with the GitHub Copilot Chat extension installed. You may also need to enable Copilot usage on the GitHub Copilot settings page.
I have a personal paid monthly plan, which costs around $10 USD, but I believe a certain amount of free monthly usage is allowed. A paid plan provides monthly usage above the free-tier amount, and you can set a budget that allows additional charges if you exceed what the paid plan provides. My understanding is that the "premium" models require a paid plan, so on a free plan you may not be able to use the same models I used to generate the code. It's possible that the free models would also do a good job; I haven't investigated different models very much. The monthly price varies depending on who is paying, and if your GitHub.com account is part of a team or organization account, the administrators may prevent you from using Copilot or set other restrictions.
I have not done very much configuration of the chat extension. At the bottom of the window, below where you write the prompt, I have "Local" selected for Delegate Session, "Agent" selected for Set Agent, and the model indicated below for Pick Model.
Caveats:
My general impression is that the quality of Igor code generated by Copilot is surprisingly high, but it also makes dumb mistakes that sometimes have serious consequences. If you are using AI to write important code (particularly for data analysis, data loading, etc.) you should carefully verify that the code itself is correct and that it does what you intend. You also must remember that in the best case AI does what you tell it to do, not what you want it to do. You may need to be overly precise in your instructions to avoid it doing the wrong thing, particularly in cases where the expectation is unclear.
I plan to periodically use GitHub Copilot to write Igor Pro code in response to questions from users. I'll post my replies in the relevant threads and link to them from here.
Claude Opus 4.5:
For comparison, some time ago I tried Claude (at the free tier) to review an existing procedure. I was exceptionally pleased with the step-by-step analysis it presented to help me significantly improve what I had created. I can highly recommend it for that kind of review. I cannot comment on its ability to generate Igor Pro code from scratch.
Otherwise, to support your bold statement: anyone needing to write Igor Pro code to do something should consider, as a first step, asking here for advice. Think of the IgorExchange forum as a free resource with the ability to function as a high-level AI tuned specifically to all the nuances of Igor Pro.
February 6, 2026 at 07:08 pm - Permalink
Curious,
Have the tools progressed enough so the cost of making a Mac version of IP10 is now reasonable?
Andy
February 9, 2026 at 08:45 am - Permalink
I see both positive and negative aspects in this development. The last thing I would want is that this community is run into the ground by a flood of slop code, which unfortunately is straining quite a few other projects and communities these days. But I hope that the tools get good enough to be a genuine help, especially for new users such as students. I personally have become vastly better at writing Python (read: an evolution from being completely lost to being rather terrible at it) using AI.
Now, since I had VS Code with a GitHub integration 'lying around' here, I tried the free and available options on my projects. I requested a code review and a list of mistakes from GPT-4.1, GPT-4o, GPT-5 and Claude Haiku 4.5. My experience was surprisingly positive. While GPT-4 missed the point most of the time, GPT-5 and Haiku gave some genuinely useful suggestions and even found some obscure typos and variable mismatches. I would say the latter two gave about 1/4 really helpful output, 1/4 general tips valid for any programming language, 1/4 suggestions that were somewhat nice but not really important for Igor (e.g., consistent capitalization of variable names), and 1/4 that were just off or wrong (e.g., suggesting that numtype(NaN) returns false, or that I should rename the generic "GraphPopup" menu). So I was happy about the 1/4 that was usable, which I probably would have missed otherwise. None of this turned up impactful bugs or called for serious code rewrites, since I have already polished my projects for over 10 years. I also did not ask any follow-up questions, which might have improved the situation. That's what I got so far from my limited tests.
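As an aside, the numtype suggestion mentioned above really was wrong: as documented, Igor Pro's numtype returns a numeric classification code rather than a Boolean. A minimal sketch (the function name DemoNumtype is just an example):

```
// Igor Pro: numtype classifies a number rather than returning true/false.
// 0 = normal number, 1 = +/-INF, 2 = NaN
Function DemoNumtype()
	Print numtype(3.14)	// 0: normal number
	Print numtype(Inf)	// 1: infinity
	Print numtype(NaN)	// 2: NaN
End
```

So a NaN test should compare against 2, e.g. `if (numtype(v) == 2)`, rather than treating the result as a Boolean.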
February 9, 2026 at 07:25 pm - Permalink
In reply to Curious, Have the tools… by hegedus
No, and I wouldn't expect that to be the case in any relevant time frame. As https://www.wavemetrics.com/news/igor-and-apple-arm-processors mentions, there are serious roadblocks to an ARM port that are still not solved. We could potentially get around some of those by dropping big features from a macOS version of Igor, but it's hard to know whether doing that would leave us with a viable product, given the relatively small (but vocal!) share of IP9 users running on macOS.
One roadblock to using AI for serious Igor development is that AI models can only hold so much context at once. Igor's source code is voluminous, and not particularly well encapsulated. As an example, I asked Copilot a relatively simple question about our internal function that saves a packed experiment. After answering that question, GitHub Copilot indicated that the context window was around 80% full. I think we'd run into the context limit pretty quickly if we had it write any serious code.
February 10, 2026 at 07:37 am - Permalink