6 minutes ago, SensataPDS said:
In thinking about this camera as more than a viewing feature it may also be useful to check if the build platform is empty before starting?
That one wouldn't be too hard for a computer to do, unless whatever is sitting on the build platform is the same colour as the platform itself. The software could keep an easy reference for where the platform should be in the frame and throw an error if that area isn't a consistent colour.
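A minimal sketch of that consistency check, using plain NumPy on a camera frame. The region coordinates and the tolerance threshold are illustrative assumptions, not values from any real printer firmware:

```python
import numpy as np

def platform_looks_empty(image, platform_region, tolerance=12.0):
    """Check whether the build-platform area of a camera frame is a
    consistent colour, suggesting nothing is sitting on it.

    image: HxWx3 uint8 array from the camera.
    platform_region: (row_start, row_stop, col_start, col_stop) giving
        where the platform appears in the frame (assumed pre-calibrated).
    tolerance: maximum per-channel standard deviation still counted as
        "consistent" (an arbitrary illustrative threshold).
    """
    r0, r1, c0, c1 = platform_region
    patch = image[r0:r1, c0:c1].astype(np.float64)
    # If an object is on the platform, the colour in this region will
    # vary far more than an empty, uniformly lit platform would.
    return bool(np.all(patch.std(axis=(0, 1)) < tolerance))

# Synthetic example: a uniform grey "platform" vs one with a red blob on it.
empty = np.full((100, 100, 3), 128, dtype=np.uint8)
occupied = empty.copy()
occupied[40:60, 40:60] = [200, 30, 30]

print(platform_looks_empty(empty, (10, 90, 10, 90)))     # True
print(platform_looks_empty(occupied, (10, 90, 10, 90)))  # False
```

As the reply above notes, this falls apart when the object matches the platform's colour, and real lighting would demand a much more forgiving tolerance than a synthetic image does.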
Recommended Posts
Slashee_the_Cow 272
It's nowhere near as simple as it sounds. A single RGB camera cannot detect depth, which, while not a fatal problem, does make things a lot harder. It would be extremely computationally expensive (and still really computationally expensive if you had a depth camera).
The computer or printer would need to produce a photorealistic rendering for every single layer, taking into account the angle of the camera (assuming it can both be adjusted minutely enough and report its angle with high accuracy). You'd then need AI to compare the images and make sure they matched. It couldn't be a simple pixel-for-pixel match: thanks to varying lighting conditions, slight changes in the colour of the filament, and various other things, up to and including atmospheric conditions (a small effect at this scale, but still present), the odds of getting an exact match between a rendering and a picture from the camera would be zero.
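The point about exact matches being impossible is why any comparison has to be tolerance-based. A deliberately naive sketch, using mean absolute pixel difference with an arbitrary illustrative threshold (a real system would need alignment, normalisation, and probably a learned model, as the post argues):

```python
import numpy as np

def frames_roughly_match(rendered, photo, max_mean_diff=20.0):
    """Compare a rendered layer preview with a camera frame.

    An exact pixel-for-pixel match will never happen (lighting, filament
    colour variation, sensor noise), so compare against a tolerance on
    the mean absolute per-pixel difference instead. max_mean_diff is an
    assumed, illustrative threshold.
    """
    a = rendered.astype(np.float64)
    b = photo.astype(np.float64)
    return float(np.abs(a - b).mean()) <= max_mean_diff

# A frame plus mild simulated sensor noise still "matches"...
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(64, 64, 3)).astype(np.uint8)
noise = rng.integers(-10, 11, size=frame.shape)
noisy = np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)
print(frames_roughly_match(frame, noisy))  # True

# ...while an unrelated frame does not.
other = rng.integers(0, 255, size=(64, 64, 3)).astype(np.uint8)
print(frames_roughly_match(frame, other))  # False
```

Even this toy version shows the core tradeoff: set the tolerance too tight and lighting changes trigger false alarms; too loose and real defects slip through.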
Even if you want to dumb the system down to look only for obvious, glaring errors, you'd still need to render the outline of the object at each layer (or every few layers), bearing in mind the angle of the camera, which would likely still require some (computationally expensive) raytracing. From that you can calculate a boundary where it should expect to see the model and, allowing a small margin of error, look for anything the same colour as the filament outside that boundary (within a larger margin of error, again thanks to lighting conditions; it also wouldn't work if the filament were similar in colour to the inside of the printer). You'd then need a threshold to decide whether a detection means "could be a string or a small bit of support fell off" or "yep, the model's hosed".
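The last step of that dumbed-down approach, deciding between "string" and "hosed", can be sketched once the expensive parts are assumed away. Here the rendered-outline mask is simply given, and the colour tolerance and pixel-count threshold are made-up illustrative values:

```python
import numpy as np

def classify_stray_filament(frame, expected_mask, filament_rgb,
                            colour_tol=40, minor_px=50):
    """Look for filament-coloured pixels outside where the model should be.

    frame: HxWx3 uint8 camera frame.
    expected_mask: HxW bool array, True where the rendered outline (plus
        a margin of error) says the model may appear. Producing this mask
        is the expensive rendering/raytracing step; here it's just given.
    filament_rgb: nominal filament colour. colour_tol is a generous
        per-channel tolerance for lighting variation (illustrative).
    minor_px: below this many stray pixels, treat it as a harmless
        string rather than a failed print (also illustrative).
    """
    diff = np.abs(frame.astype(int) - np.array(filament_rgb, dtype=int))
    filament_like = np.all(diff <= colour_tol, axis=2)
    stray = filament_like & ~expected_mask
    count = int(stray.sum())
    if count == 0:
        return "ok"
    return "possible string" if count < minor_px else "print likely failed"

# Example: dark printer interior, orange filament, model inside the mask.
frame = np.zeros((80, 80, 3), dtype=np.uint8)
mask = np.zeros((80, 80), dtype=bool)
mask[20:60, 20:60] = True              # where the model should be
frame[20:60, 20:60] = [230, 120, 20]   # model pixels, inside the mask
print(classify_stray_filament(frame, mask, (230, 120, 20)))  # "ok"

frame[5:8, 5:8] = [230, 120, 20]       # 9 stray pixels: a string
print(classify_stray_filament(frame, mask, (230, 120, 20)))  # "possible string"

frame[0:20, 60:80] = [230, 120, 20]    # 400 more stray pixels: trouble
print(classify_stray_filament(frame, mask, (230, 120, 20)))  # "print likely failed"
```

Note how the dark interior makes this easy; as the post says, filament close in colour to the inside of the printer would defeat the colour test entirely.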
Then there are the effects on the print itself. If you don't move the print head out of the way, your view is blocked and you might not see errors; you'd also have to factor the head into the calculations and internal renderings. If you do move the print head, the filament would need to be completely retracted without leaving any strings, and with some types of filament, in the time it takes to move the head out of the way, take a photo, and move back, with no heat being applied from new filament being laid down, the model could warp. Even with something not very susceptible to warping (like PLA), you'd get far more obvious Z seams: the nozzle would have to be completely purged before the head moved away from the model (to avoid creating strings), then fully primed once it got back into position so it could resume with a smooth movement, which would probably leave a small blob of filament.
SensataPDS 2
Thanks for your reply. I understand it would have to be built with this in mind rather than just being a camera. In thinking about this camera as more than a viewing feature it may also be useful to check if the build platform is empty before starting?