multiple includes of one file make compilation slow

I am working on a mid-sized project that consists of a relatively large number of procedure files. To keep each file independent, every file has #include directives for the files it depends on. As a result, some important files get included several times, which appears to confuse the Igor compiler enough that the project takes about 10-20 seconds to compile on my Windows machine and crashes with an "out of memory" error on my Mac laptop.

Each file is protected by include guards (#ifndef X / #define X). Examining the dependency graph shows that it has no cycles, 21 files, and 33 includes (these numbers are only for this project's subgraph, so the full include graph is somewhat larger). Rearranging the graph so that each file is included only once (20 edges) restores normal compilation speed, so I know the problem isn't too many files or problems in the source code itself. If anyone is really curious, the code with the rearranged include graph is at commit https://github.com/yamad/igorunit/commit/a9d47aef6bc; the parent commit has the original dependency graph. (Note that if you want to run the code, you also need the package at https://github.com/yamad/igorutils.)
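For reference, each file is guarded roughly like this (a minimal sketch with made-up file and symbol names; the real files are in the repositories linked above):

// Hypothetical library file "listutils.ipf"
#pragma rtGlobals=1

#ifndef LISTUTILS_INCLUDE
#define LISTUTILS_INCLUDE

// hypothetical dependency, guarded the same way in its own file
#include "stringutils"

Function ListUtils_count(list)    // count items in a semicolon-separated list
    String list
    return ItemsInList(list)
End

#endif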

Can anyone shed any light on this situation so that I don't keep running into this problem? What's a recommended way to organize code given the implementation of the #include feature? The obvious workaround is to include each file only once, but managing dependencies this way is annoying, error-prone, and makes files more interdependent than they need to be. It appears that the compiler is doing far too much work on files that are already loaded, so maybe this is a bug? Thanks for your insight!

Out of curiosity, I examined the full dependency graph to see what the numbers look like. As noted above, the project subgraph itself has 21 nodes (files) and 33 edges (includes). The full graph has 22 nodes and 46 edges.

I looked at the code and I don't see an explanation for why it is slow.

I tried including the same file 100 times and it took a few seconds on both Macintosh and Windows.

At any rate, I recommend that you have one main file that contains all of your #includes. This will solve the slowness problem and also make it much easier to understand which files your package requires.
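For example (using made-up file names), the top-level file would contain nothing but the #include directives for the package:

// Hypothetical "MyPackage.ipf" -- the one file users of the package include
#pragma rtGlobals=1

// pull in every procedure file the package needs, each exactly once
#include "listutils"
#include "stringutils"
#include "testrunner"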


Thanks for looking into this. After some more playing around, I think the issue is more about complicated dependency graphs than simply including a file multiple times. For instance, if B depends on C, and A depends on B and C, then C gets included twice. With many of these sorts of relationships in the graph, Igor seems to get a bit confused.
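Concretely, with made-up file names, the include directives end up looking like this:

// In procedure file "B.ipf" -- B is implemented in terms of C:
#include "C"

// In procedure file "A.ipf" -- A uses both B and C directly,
// so C ends up being pulled in twice:
#include "B"
#include "C"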

Of course, one obvious response to this problem is to get rid of the direct relationship between A and C. However, B may not always depend on C, and if A uses facilities of C directly then that relationship should be reflected in file A.

hrodstein wrote:
I recommend that you have one main file that contains all of your #includes. This will solve the slowness problem and also make it much easier to understand which files your package requires.


Sure, but this only works when the package is used as a unit. In my case, I have a bunch of files that are intended as reusable "library" modules that can work independently. A client file loads only the libraries it needs. Some of the libraries are implemented using facilities from lower-level libraries, but the client code shouldn't need to worry about that implementation detail.
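For instance (hypothetical file names again), a client might include only the one library it uses directly:

// Client's procedure file -- includes only the library it actually uses:
#include "testrunner"

// Inside "testrunner.ipf" (the library's own file, not the client's),
// the library pulls in the lower-level modules it is built on:
#include "listutils"
#include "stringutils"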