I'd also recommend looking into how the USB protocol for your camera works. On mine (a Sony A7R III) there are some fairly impactful limitations. One is that you seemingly can't shoot a burst directly over USB, though you can work around this with a shutter release cable. The other is that you can't change shutter speed or ISO quickly: you can only increment or decrement them one step at a time, as sketched below. The latter issue is fixed on newer models, though.
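To make the slowness concrete, here's a minimal sketch of the increment/decrement workaround. It's purely illustrative: `send_exposure_step` is a hypothetical stand-in for whatever single-step command your USB/PTP wrapper (gphoto2, the Sony Remote SDK, etc.) exposes, not a real API call.

```python
# Hypothetical sketch: stepping the shutter speed over USB one notch at a time.
SHUTTER_STOPS = ["1/250", "1/320", "1/400", "1/500", "1/640", "1/800", "1/1000"]

def send_exposure_step(direction):
    # Placeholder for the real PTP/USB step command; each call is one round trip.
    print(f"step {'+1/3' if direction > 0 else '-1/3'} stop")

def set_shutter_speed(current, target):
    """Walk the shutter-speed setting from `current` to `target` one step at a time."""
    i, j = SHUTTER_STOPS.index(current), SHUTTER_STOPS.index(target)
    direction = 1 if j > i else -1
    for _ in range(abs(j - i)):
        send_exposure_step(direction)

set_shutter_speed("1/250", "1/1000")  # six 1/3-stop steps, i.e. six USB round trips
```

Each step is its own command over the wire, which is why large exposure changes feel so sluggish compared to setting a value directly.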
I haven't done this exact type of thing, but my understanding is that the radios and wireless chipsets in most cameras are pretty poor compared to what you get in a phone or computer. USB tethering is probably the way to go to reduce bottlenecks. You could potentially tether to a phone (though I don't know what the software situation looks like) or possibly an ARM SBC, since some have LTE and 5G connectivity.
I don't have experience with the Fuji system, but I would make sure you're budgeting appropriately for lenses. The common saying is "date the body, marry the lens," meaning spend more on getting good lenses than on the body. You can hold onto good lenses for a long time and upgrade the body as needed, and good lenses can get super expensive, especially in wildlife photography.
The kit lens won't give you enough reach for most wildlife, and it's probably not the sharpest either.
But at least regulators can force NVIDIA to open their CUDA library and at least have some translation layers like ZLUDA.
I don't believe there's anything stopping AMD from re-implementing the CUDA APIs; in fact, I'm pretty sure that's exactly what HIP is for, even though it's not 100% automatic. AMD probably can't link against the CUDA libraries like cuDNN and cuBLAS, but I don't know that doing so would be useful anyway, since I'm fairly certain those libraries contain GPU-specific optimizations. AMD makes its own replacements for them in any case.
IMO, the biggest annoyance with ROCm is that the consumer GPU support is very poor. On CUDA you can use any reasonably modern NVIDIA GPU and it will "just work." This means if you're a student, you have a reasonable chance of experimenting with compute libraries or even GPU programming if you have an NVIDIA card, but less so if you have an AMD card.
KStars
I'll add that KStars has a really powerful astrophotography suite called Ekos. It has lots of helpful automation features that make imaging relatively simple to set up.
I work in CV and I have to agree that AMD is kind of OK at best there. The core DL libraries like torch play nice with ROCm, but you don't have to look far to find third-party libraries explicitly designed around CUDA or NVIDIA hardware in general. Some examples are the hugely popular OpenMMLab/mmcv framework, tiny-cuda-nn and nerfstudio for NeRFs, and Gaussian splatting. You could probably get these working on ROCm with HIP, but it's much more of a hassle than configuring them on CUDA.
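As a quick aside, here's a minimal sketch of what "playing nice" looks like in practice: PyTorch's ROCm builds reuse the `torch.cuda` namespace, so the same device-selection code runs on both vendors, and `torch.version.hip` versus `torch.version.cuda` tells you which backend your install was built against.

```python
import torch

# Works on both CUDA and ROCm builds of PyTorch; ROCm reuses the torch.cuda API.
if torch.cuda.is_available():
    backend = ("ROCm/HIP " + torch.version.hip) if torch.version.hip else ("CUDA " + torch.version.cuda)
    print(f"Using {torch.cuda.get_device_name(0)} via {backend}")
else:
    print("No supported GPU found; on ROCm this often means an unsupported consumer card.")
```

The trouble starts with third-party packages that build custom CUDA extensions or assume NVIDIA-only tooling, which is where the HIP porting hassle comes in.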
I've tried Overture, Creality, and Inland (all black though, not transparent), and Overture printed the best for me (at least for functional parts where I cared about print quality and tolerances). Inland's PETG+ and High Speed PETG were even better, though.
What I do have an issue with is new users that try and have problems and immediately start whinging that “FreeCAD isn’t like what I know. And it needs to be like my favorite” Those are the lazy people that can’t be bothered to learn something new. And they should either expend the effort to learn or go back to whatever they were using
I think that's fair, but most criticisms of FreeCAD from people coming from other CAD packages fall into the latter category you mention here:
But if you have given FreeCAD, (or ANYTHING new in life), an honest try and you can’t get the hang of it or simply don’t like it.
I don't think we're actually disagreeing in principle, just on what we perceive as the common criticisms of FreeCAD. Usually, I've seen people coming from other CAD programs get frustrated at limitations within FreeCAD or at needing to work around bugs in ways that slow them down. For example, FreeCAD previously couldn't cope with a single sketch containing multiple separate regions (I believe 1.0 now supports extruding different regions of the same sketch, but previously FreeCAD would throw an error), which made modeling less efficient for those coming from programs like Solidworks where this works fine. Throw other issues like toponaming into the mix and it's no surprise that people from other CAD programs tried learning it, got frustrated (since their baseline was better than what FreeCAD could offer), and moved on.
I agree that criticizing FreeCAD for having different workflows than other CAD programs is a bit silly, though. I don't really care what the exact workflow is as long as it 1) works and 2) is fast, and for me FreeCAD 1.0 (and previously Realthunder's branch) ticks all the boxes there.
I appreciate the respectful discussion!
I do think the point about all CAD packages having failure paths is a little overblown. Yes, you can definitely get proprietary CAD to break, but in my experience (at least with Solidworks and Fusion) it usually takes much more complex parts to break than it does with FreeCAD. Post-1.0 the situation is definitely better, though.
You're right that users should try to follow best practices from day one, but realistically most users won't learn everything correctly on their own. They might follow an out-of-date tutorial, or might have just learned by tinkering.
The point I was trying to make was that because FreeCAD operates more differently from other CAD programs than they do from one another, and because it's generally a bit more brittle and demanding of the user, I can't blame anyone for not wanting to switch if they already have a CAD program they're proficient with. You could call it laziness, but from a practical standpoint there isn't necessarily much to gain for the relatively large time investment required to become capable with it.
I really hope FreeCAD improves enough in the new-user experience department one day. I love the software and have been using it as my tool of choice for years now, but evidently not everyone thinks it's worth the time investment.
The main benefit, I think, is massive scalability. For instance, DOE scientists at Argonne National Laboratory are working on training a language model for scientific uses. This isn't something you can do on even tens of GPUs over a few hours, as is common for jobs run on university clusters and the like. They're doing it by scaling up to a large portion of ALCF Aurora, which is an exascale supercomputer.
Basically, for certain problems you need both the ability to run jobs on lots of hardware and the ability to run them for long periods (but not so long that they block other labs' work). Big clusters like Aurora are helpful for that.
I'll mention that this fix primarily mitigates toponaming for sketch attachment. Some features still struggle with it, namely chamfers and fillets. But in any case, it's a massive step forward and makes FreeCAD much easier to recommend! Until now I've been using Realthunder's fork, since toponaming was such a headache to resolve manually.
I think that's a little unfair. The bigger issue IMO is that FreeCAD doesn't quite share the same workflow as other (proprietary) CAD packages, so someone coming from proprietary CAD also needs to unlearn habits that used to be fine but are now potentially harmful. For example, adding chamfers and fillets in FreeCAD should pretty much only be done at the end to avoid toponaming issues, which is less of a concern in other packages.
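To make the fragility concrete, here's a rough sketch for the FreeCAD Python console (my own illustration, untested; the exact generated edge name will vary, so treat "Edge1" as an assumption). The point is that a dress-up feature like a fillet stores a reference to an edge by its generated name, and if earlier edits reshuffle those names, the fillet lands on the wrong edge or fails, which is why it's safest to add it last.

```python
# Rough FreeCAD Python-console sketch; "Edge1" below is a generated name and
# exactly the kind of reference that toponaming changes can invalidate.
import FreeCAD as App
import Part

doc = App.newDocument("FilletDemo")
body = doc.addObject("PartDesign::Body", "Body")

# A simple circular profile, padded into a cylinder.
sketch = body.newObject("Sketcher::SketchObject", "Sketch")
sketch.addGeometry(Part.Circle(App.Vector(0, 0, 0), App.Vector(0, 0, 1), 10), False)
pad = body.newObject("PartDesign::Pad", "Pad")
pad.Profile = sketch
pad.Length = 20
doc.recompute()

# The fillet references an edge of the pad by its generated name. If an
# upstream feature changes and the edge names get reshuffled, this breaks.
fillet = body.newObject("PartDesign::Fillet", "Fillet")
fillet.Base = (pad, ["Edge1"])
fillet.Radius = 2.0
doc.recompute()
```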
Yeah we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low resolution, like that on 3D printers) with that of dense, high resolution LiDAR. However, the cost has definitely still come down.
I agree that perception models aren't great at this task yet. IMO monodepth never produces reliable 3D point clouds, even when the depth maps and metrics look reasonable. MVS does better but is still prone to errors. I do wonder if any companies are considering depth completion with sparse LiDAR instead; the papers I've seen on this topic usually produce much more convincing point clouds.
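For anyone curious what the input to those depth-completion networks typically looks like, here's a minimal sketch (my own illustration, not from any particular paper) of projecting a sparse LiDAR point cloud into a per-pixel depth map with a pinhole camera model; the function name and layout are just assumptions for the example.

```python
import numpy as np

def sparse_depth_map(points_cam, K, height, width):
    """points_cam: (N, 3) XYZ points already in the camera frame (Z forward);
    K: 3x3 pinhole intrinsic matrix. Returns an HxW map with 0 = no measurement."""
    # Keep only points in front of the camera.
    pts = points_cam[points_cam[:, 2] > 0]

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], pts[valid, 2]

    # Keep the nearest return when several points land on the same pixel.
    depth = np.full((height, width), np.inf)
    np.minimum.at(depth, (v, u), z)
    depth[np.isinf(depth)] = 0.0
    return depth
```

The resulting mostly-empty map, together with the RGB frame, is the typical network input in the depth-completion setups I've seen.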
I think it's been about a year? IIRC Intel only started using TSMC for their processors with Meteor Lake, which was released in late 2023.
I believe their discrete GPUs have been manufactured at TSMC for longer than that, though.
I use a lot of AI/DL-based tools in my personal life and hobbies. As a photographer, DL-based denoising means I can get better photos, especially in low light. DL-based deconvolution tools help sharpen my astrophotos as well. The deep-learning-based subject tracking on my camera also helps me get more in-focus shots of wildlife. As a birder, tools like Merlin BirdID's audio recognition and image classification methods are helpful when I encounter a bird I don't yet know how to identify.
I don't typically use GenAI (LLMs, diffusion models) in my personal life, but Microsoft Copilot does help me write visualization scripts for my research. I can never remember the right methods for visualization libraries in Python, and Copilot/ChatGPT do a pretty good job at that.
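For context, the kind of boilerplate I mean is nothing fancy; something like this (a made-up example with invented file and column names) is what I'd otherwise have to look up every time:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Made-up results file and column names, purely for illustration.
df = pd.read_csv("results.csv")

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(df["epoch"], df["val_loss"], label="validation loss")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.legend()
fig.tight_layout()
fig.savefig("val_loss.png", dpi=200)
```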
There is no "artificial intelligence" so there are no use cases. None of the examples in this thread show any actual intelligence.
There certainly is (narrow) artificial intelligence. The examples in this thread are almost all deep learning models, which fall under ML, which in turn falls under the field of AI. They're all artificial intelligence approaches, even if they aren't artificial general intelligence, which more closely aligns with what a layperson thinks of when they say AI.
The problem with your characterization (showing "actual intelligence") is that it's super subjective. Historically, being able to play Go and to a lesser extent Chess at a professional level was considered to require intelligence. Now that algorithms can play these games, folks (even those in the field) no longer think they require intelligence and shift the goal posts. The same was said about many CV tasks like classification and segmentation until modern methods became very accurate.
I work in CV and a lot of labs I've worked with use consumer cards for workstations. If you don't need the full 40+GB of VRAM you save a ton of money compared to the datacenter or workstation cards. A 4090 is approximately $1600 compared to $5000+ for an equivalently performing L40 (though with half the VRAM, obviously). The x090 series cards may be overpriced for gaming but they're actually excellent in terms of bang per buck in comparison to the alternatives for DL tasks.
AI has certainly produced revenue streams. Don't forget AI is not just generative AI. The computer vision in high-end digital cameras is all deep learning based and gets people to buy the latest cameras, for example.
Yeah there's a good chance you're right. Maybe something to do with memory management as well.
Long term I'll probably end up switching back to Darktable. I used it before and honestly it is quite good, but I currently have a free license for CC from my university, and the AI denoise features in LR are pretty nice compared to the classical profiled denoise in Darktable. It also helps that the drivers for my SD card reader are less finicky on Windows, so it's quicker to copy images off my camera there than on Linux. Hopefully that also gets better over time!
I don't know exactly, but it's apparently a thing. Some game anti-cheat software such as Easy Anti-Cheat will give you an error message saying something along the lines of "Virtual machines are not supported." Some are easy to bypass by just tweaking your VM config, others not so much.
Not that unusual IMO, lots of people start their PhD directly after completing their Bachelor's. If they weren't born in the first half of the year, then they'll have completed their BS by 21 and will start the PhD at either 21 or 22.