A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 2 Posts
  • 231 Comments
Joined 10 months ago
Cake day: June 25th, 2024

  • It’s a long article. But I’m not sure about the claims. Will we get more efficient computers that work like a brain? I’d say that’s sci-fi. Will we get artificial general intelligence? Current LLMs don’t look like they’re able to fully achieve that. And how would AI continuously learn? That’s an entirely unsolved problem at the scale of LLMs. And if we ask whether computer science is science… why compare it to engineering? I found it to be much more aligned with maths at university level…

    I’m not sure. I didn’t read the entire essay. It sounds to me like it isn’t really based on reality. But LLMs are certainly challenging our definition of intelligence.

    Edit: And are the history lessons in the text correct? Why do they say a Turing machine is an imaginary concept (which is correct), then say ENIAC became the first one, but then maybe not? Did we invent binary computation because of reliability issues with vacuum tubes? This is the first time I’ve read that, and I highly doubt it. The entire text just looks like a fever dream to me.


  • Yeah, seeking support is notoriously difficult. Everyone working in IT knows this. I feel that with open source, it’s more the projects outside the classic Free Software domains that attract beggars. For example, the GitHub page of a Linux tool has a completely different atmosphere than that of a fancy AI tool or an addon to some consumer device or service. I see a lot of spam and a demanding tone there. With a lot of the more niche projects, people are patient and ask good questions, and in return the devs are nice. And people use the thumbs-up emoji instead of pinging everyone with a comment…

    I feel, though, that if you’re part of an open-source project which doesn’t welcome contributions and doesn’t want to discuss arbitrary user needs and wants, you should make that clear. I mean, Free Software is kind of the default in some domains. If you don’t want that as a developer, just add a paragraph of text somewhere prominent, detailing how questions and requests are or aren’t welcome. As a user, I can’t always tell whether discussing my questions is welcome and whether the software is supposed to cater to my needs, unless the project tells me somehow. That won’t help with the beggars… But it will help people like me not waste everyone’s time.




  • Last time I checked, Waydroid was one of the more common ways to launch Android apps on Linux. I mean, you can’t just package the bare app file, since you need the whole Android runtime and graphical environment. Plus, an app could include machine code for a different architecture than your desktop computer’s. So you either use a layer like Waydroid, or bundle all of this together with the app in a Linux package…

    Android includes a lot more than just a Linux kernel. An app could request access to your GPS, or to your contacts, calendar or storage. And that’s not part of Linux. In fact, not even asking to run in the background or opening a window translates directly to Linux. An Android app can do none of that unless the framework to deal with it is in place. That’s why we need emulation or translation layers.
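
    The Waydroid route usually boils down to a handful of commands. A rough sketch (the APK file name and package name are made-up examples; details vary by distro):

```shell
# One-time setup: download and initialize the Android image (needs root)
sudo waydroid init

# Start the Android container/session
waydroid session start &

# Install an APK into the container (example file name)
waydroid app install ./myapp.apk

# Launch it by its Android package name (example identifier)
waydroid app launch com.example.myapp
```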








  • This topic always gets strong opinions on Lemmy. The truth with security is: it always depends a lot on what you’re doing and fighting against, i.e. the threat vectors. There probably are some edge cases where it’s better to have physical control over the server. And there will be other cases where it’s better to use an established solution.

    Just keep in mind, the people at the good companies do this as a job. They probably have years of experience. They’ve had long meetings to discuss technicalities, what might happen and how to handle it. They’ve analyzed the threat vectors and put some thought into the exact setup. And they likely improve it constantly. You need to judge for yourself whether you can do it as well as they do. And you obviously don’t want to make any major mistakes.

    There are several all-in-one mail solutions available. I don’t know which of them support encryption at rest. Stalwart can do it. There are also autocrypt.org and some Dovecot plugins, so I guess it’s achievable with most setups.
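
    For illustration, Dovecot’s mail-crypt plugin is one way to get encryption at rest; a minimal sketch of the global-key variant could look like this (key paths are placeholders, and you’d want to check the Dovecot docs for your version):

```
# conf.d/90-mail-crypt.conf (sketch, not a complete setup)
mail_plugins = $mail_plugins mail_crypt

plugin {
  # the leading '<' makes Dovecot read the key from the file
  mail_crypt_global_private_key = </etc/dovecot/keys/ecprivkey.pem
  mail_crypt_global_public_key = </etc/dovecot/keys/ecpubkey.pem
  mail_crypt_save_version = 2
}
```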

    I like selfhosting and having control. What I host probably isn’t perfectly secure, though, since I don’t spend all my time on it, and I haven’t had anyone else look at the config and check for potential problems. E-mail is one of the more complicated things to host. Due to abuse and spam, a bazillion things got added on top of the original protocol, and the other providers are relatively strict about flagging mails as spam or straight up refusing to accept them. So there are lots of things to do and get right, even without encryption. And usually the needed ports are blocked on residential internet connections.
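
    To give an idea of the “things added on top”: receiving providers typically check DNS records like these before trusting your mail. A sketch with made-up values (example.com, the selector and the policy are placeholders):

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<your DKIM public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

    On top of that, most receivers also expect a matching PTR (reverse DNS) record, which residential connections usually can’t provide.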

    (And ultimately, your house is also under some jurisdiction, so if you’re worried about your own government, they can come raid your house and take your server. Or bug your phone and laptop. So you need additional security like encryption, means to ensure they can’t circumvent it, and tamper-proof devices.)








  • Yeah, that just depends on what you’re trying to achieve. Depending on what kind of AI workload you have, you can scale it across 4 GPUs. Or it’ll become super slow if it needs to transfer a lot of data between those GPUs. And depending on what kind of maths is involved, a Pascal-generation GPU might be perfectly fine, or it might lack support for some of the operations involved. So yes, of course you can build that rig. Whether it’s going to be useful in your scenario is a different question. But I’d argue, if you need 96GB of VRAM for more than just the sake of it, you should be able to tell… I’ve seen people discuss these rigs with several P40s or similar on Reddit, and in some forums and GitHub discussions of the software involved. You might just have to do some research and find out whether your AI inference framework and model do well on that specific hardware.
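
    For a rough sense of scale, here’s a back-of-envelope sketch of how much VRAM the weights alone need (purely illustrative numbers; real inference adds KV cache, activations and framework overhead on top):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM (GiB) needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

# A 70B model doesn't fit in 96 GB at 16-bit, but quantized versions do:
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{weight_vram_gb(70, bits):.0f} GiB")
```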