
I have an RTX 2070 that's under-utilised, partly because, to my surprise, I'm finding it hard to understand C, C++, and by extension CUDA.

I'm self-taught, and had been using web languages and some Python before learning Rust. I hope that NVIDIA can dedicate some resources to creating high-quality Rust bindings to the C API, even if only in the next 1-2 years.

Perhaps being able to use a systems language that has been easy for me to pick up, coming from TypeScript and Kotlin, could inspire me to take baby steps with CUDA without first having to master C.

I like the CUDA.jl package, and once I make time to learn Julia, I'd love to try it out. Even after this article about the Python library, I'm still left knowing very little about "how can I parallelise this function".
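For what it's worth, the core mental model behind "how can I parallelise this function" is the same in CUDA C, CUDA.jl, and the Python libraries: you write the body for a single index, and the GPU runs that body for every index at once. A sketch of the idea in plain Python (illustrative only, no actual GPU involved):

```python
# In CUDA you write what ONE thread does for ONE index i;
# a kernel launch then runs this body for all i in parallel.
def add_kernel(i, a, b, out):
    out[i] = a[i] + b[i]  # the work of a single GPU thread

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
out = [0.0] * len(a)

# A GPU launch like add_kernel<<<blocks, threads>>>(a, b, out)
# conceptually replaces this sequential loop:
for i in range(len(a)):  # the GPU does these "iterations" simultaneously
    add_kernel(i, a, b, out)

print(out)  # → [11.0, 22.0, 33.0]
```

So "parallelising a function" mostly means rewriting it so each output element can be computed independently from an index; the launch machinery (grids, blocks, memory transfers) is plumbing around that.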



A nice thing about the proper ALGOL-lineage systems programming languages (of which C shows only basic influence) is that you can write nice high-level code and only deal with pointers and raw-memory details when actually needed: think Ada, Modula-2, and Object Pascal.

So something like CUDA Rust would be nice to have.

By the way, D already supports CUDA:

https://dlang.org/blog/2017/07/17/dcompute-gpgpu-with-native...


CUDA Ada would be so, so nice. Especially with non-aliasing guarantees from SPARK...



Yes, I want the integration to be tighter. In fact I'd really like to be able to target Ada kernels to CUDA, ispc, MLIR, SPIR-V... and also have access to deep APIs for each platform. Now that GNAT-LLVM is getting stable(r), there are a lot of opportunities opening in the Ada/SPARK world. KLEE would also be fun there.

I really wasn't a fan of the 'parallel' loop construct foreseen in Ada2020: in addition to having a 'bad' syntax ('how' instead of 'what'), it wasn't really well integrated into the 'control your tasking precisely' mentality that Ada provides. Something a bit more platform-specific but still somehow portable, if designed well, would fit the Ada spirit far better.


And I thought the selling point of AdaCore to NVIDIA was more SPARK for firmware & embedded than 'classic' Ada. It might have gone further since, but it was already a huge jump for such a big tech company, one I can only applaud, when you see how much firmware hacking comes down to memory-unsafety and UB exploits...


Name it CRUST


+1 Would love to see official support for CUDA for Rust.


> I have a RTX 2070 that's under-utilised

I've found that there are really good, beginner-friendly Blender tutorials, both free and paid.


If you are looking to maximize use of that card, you can make about $5 a day mining crypto with the 2070.


No, the high electricity cost in my country + the noise pollution in the house + how much I generally earn from the machine + my views on burning the world speculatively, discourage me from mining crypto.

Perhaps my position might change in future, but for now, I'd probably rather make the GPU accessible to those open-source distributed grids that train chess engines or compute deep-space related thingies :)


I am not convinced that training an AI to win at chess is any more moral than mining crypto. And the blockchain is about as open-source as you can get.


Sure it is. With a chess AI you're driving forward progress in neural networks, machine learning, and technology in general, which pushes humanity forward in ways too numerous to count (even if you don't buy into AI hype). With crypto mining, you're hashing a bunch of things against a bunch of other things to make some imaginary things that people assign value to only because other people assign value to them, in a tautology, ad infinitum. It's a scourge on the environment and a waste of great minds: to the extent that those minds are devising new crypto, not to the extent that they're buying mining rigs.
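The "hashing a bunch of things against a bunch of other things" is literal, by the way: proof-of-work mining is a brute-force search for a nonce whose hash meets a difficulty target. A toy sketch in Python (the data, nonce encoding, and difficulty rule here are simplified assumptions, not any real chain's format):

```python
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Try nonces until sha256(data + nonce) starts with `difficulty`
    zero hex digits. This loop, run trillions of times per second,
    is essentially what mining hardware does."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"block header", 4)
```

Raising the difficulty by one hex digit multiplies the expected work by 16, which is why the energy cost scales with however much hashpower the network collectively throws at it.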



