Hello, sorry I haven't posted in a while, but I have finally found the time. I am now starting on this project and was looking for guidance on the setup.
I currently have a Windows 7 desktop and don't want to switch to or dual-boot *nix; it is also the only machine I have with an NVIDIA GPU, which I need to run the CUDA code. What would anyone recommend I use to develop with Cura? I know it is a mix of Python and C++, and I would be working with the engine, which is C++, so could I use Visual Studio 2013 (it has the Nsight CUDA plugin) along with its Git integration?
Another idea I had is to use a virtual machine running Ubuntu, but I am not sure whether VMware can expose the, I guess, "raw GPU" to the VM. When I ran "lspci | grep VGA" on a VM I already have, the adapter was listed as something like "VMware VGA Adapter" rather than the graphics card I actually have. I know this is way outside the forum's expertise, but might there be a way to use a VM running Ubuntu?
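In case it helps anyone reading, here is the kind of check I mean, assuming a Linux guest where `lspci` is available. Seeing only a virtual adapter (and no `nvidia-smi` output) suggests the VM isn't getting the real GPU:

```shell
# Inside the VM: list display-class devices. If passthrough is NOT
# working you'll see a virtual adapter (e.g. "VMware SVGA") instead
# of the NVIDIA card.
lspci 2>/dev/null | grep -i 'vga\|3d\|nvidia' || echo "lspci unavailable or no display device matched"

# If the NVIDIA driver is installed and the card is really visible,
# nvidia-smi lists it; otherwise this falls through to the message.
command -v nvidia-smi >/dev/null && nvidia-smi -L || echo "nvidia-smi not found (no NVIDIA driver/GPU in this guest)"
```

The fallback messages are just so the commands degrade gracefully on a guest without those tools; the real signal is which adapter name shows up.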
----------------------------
As for the implementation, I think I am going to aim for the simulated annealing/genetic algorithm approach to finding faster model-to-g-code outputs. I had an AI class last semester that really interested me, and I like the idea of GAs very much. Also, I don't have as much time as I would like, so I think this approach leaves me fewer technical details to understand about how the actual paths are determined; all I need to do is tweak the inputs and search for the best path times I can find in a reasonable amount of time.
I want the GA to specifically target finding the fastest estimated print time for a model. I would assume (sorry, the research hasn't happened heavily yet) that the current software already estimates print times and therefore supplies a nice cost function for the GA (any other ideas? maybe the ratio of print time to print resolution/speed). So I think what I need to determine is the set of parameters that will evolve over time.
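To make the idea concrete, here is a minimal sketch of the loop I have in mind. Everything in it is a placeholder: the parameter names are not actual Cura setting keys, and `estimate_print_time` is a toy stand-in for the slicer's real print-time estimate (in the actual project that function would slice the model with the candidate parameters and read back Cura's estimate as the cost):

```python
import random

# Hypothetical slicer parameters to evolve; names and ranges are
# placeholders, not real Cura setting keys.
PARAM_RANGES = {
    "layer_height_mm": (0.1, 0.3),
    "print_speed_mm_s": (30.0, 120.0),
    "infill_percent": (10.0, 40.0),
}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def estimate_print_time(params):
    # Toy cost function standing in for the slicer's estimate:
    # thinner layers and slower speeds take longer, infill adds path length.
    layers = 50.0 / params["layer_height_mm"]           # pretend 50 mm tall model
    per_layer = 1000.0 / params["print_speed_mm_s"]     # pretend 1000 mm of path/layer
    per_layer *= 1.0 + params["infill_percent"] / 100.0
    return layers * per_layer                           # "seconds", toy units

def crossover(a, b):
    # Uniform crossover: each parameter comes from one parent at random.
    return {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}

def mutate(ind, rate=0.2):
    out = dict(ind)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)   # resample within the allowed range
    return out

def evolve(pop_size=20, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=estimate_print_time)     # lower estimated time = fitter
        elite = pop[: pop_size // 4]          # elitism: keep the best quarter
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return min(pop, key=estimate_print_time)

if __name__ == "__main__":
    best = evolve()
    print(best, estimate_print_time(best))
```

With a real cost function the shape stays the same; only `estimate_print_time` changes, which is exactly why having the slicer's existing estimate as the fitness is so convenient.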