Hmm, interesting point. If you were a hacker, wouldn't you prefer software to be spread out everywhere, so people would be even more confused about what the real source for some application is?
I guess people would then just depend on their search engine
Well, in principle I do not see much difference between 'curl | bash', 'sudo apt-get install', or installing an app on your phone. In the end, it all depends on trust.
Considering how complex software has become, and on how many libraries from all over the internet any application that does more than 'hello world' depends, I do not see how you can manage if you are not prepared to put blind trust in some things.
Concerning CrowdStrike: I am just reading a book on human behaviour (very interesting for everybody who is interested in cybersecurity), and I am just at the chapter about the fear of deciding with unknown parameters vs. the fear of not deciding at all. Any piece of software will break at some point, so will you wait forever to find something that will not have any vulnerabilities?
Obtainium seems to have a very interesting take on this. Thanks for the link! I will check it out 👍
The problem here is this: how is a user supposed to know whether the official website of an application is organicmaps.app, organic-maps.app, organicmaps.org or github.com/organicmaps?
And even if she/he knows, hackers find ways to make you look the other way. The funny thing in this case is that the original author complained that the app was removed from the Google Play Store, and did so on the fosstodon Mastodon server. Although I guess this was not at all planned, he made the almost perfect social-engineering post. :-)
Hi,
Just to put things into perspective.
Well, this example dates from some years ago, before LLMs and ChatGPT. But I agree that the principle is the same (and that was exactly my point).
If you analyse this, the error the person made was that he assumed an Arduino to be like a PC, while it is not. An Arduino is a microcontroller. The difference is that a microcontroller has limited resources: pins, hardware interrupts, timers, ... In addition, pins can be reconfigured for different functions (GPIO, UART, SPI, I2C, PWM, ...). Also, a microcontroller of the Arduino class does not run an RTOS, so it is coded "bare-metal". And as there is no operating system that does resource management for you, you have to do it in the application.
And that was the problem: although resource management is the responsibility of the application programmer, the Arduino environment has largely pushed that off to the libraries. The libraries configure the ports in the correct mode, set up timers and interrupts, configure I/O devices, ... And in the end, this is where things went wrong. So, in essence, what happened is that the programmer made assumptions based on the illusion created by the libraries: that writing an application on Arduino is just like using a library on a Unix box (which is not correct).
That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.
I am not saying LLMs are by definition bad. I am just careful about the assumptions they can create.
As a sidenote, this reminds me of a discussion I have every so often on "tools that make things too easy".
There is something I call "the Arduino effect": people who write code for things based on example code they find left and right, and all kinds of libraries they mix together. It all works ... for as long as it works. The problem is what happens when things do not work.
I once helped out somebody who had an issue with a simple project:
he: "I don't understand it. I have this sensor, and this library ... and it works. Then I have this 433 MHz radio module with that library, and that also works. But when I use them together, it doesn't work."
me: "What have you tried?"
he: "Well, I looked at the libraries. They all look OK. I reinstalled all the software. It's neither of those."
me: "Could it be that these two boards use the same hardware interrupt or the same timer?"
he: "The what???"
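To make that failure mode concrete, here is a toy sketch (plain Python, not real Arduino code; the "library" names are made up for illustration) of what happens when two libraries each grab the same timer interrupt vector without checking:

```python
class MCU:
    """Toy model of a microcontroller with a single Timer1 interrupt vector."""

    def __init__(self):
        self.timer1_isr = None  # one hardware timer, one vector

    def attach_timer1_interrupt(self, isr_name):
        # Like many real libraries: just write the vector, no ownership check.
        self.timer1_isr = isr_name


mcu = MCU()
mcu.attach_timer1_interrupt("radio_isr")   # radio library alone: works
mcu.attach_timer1_interrupt("sensor_isr")  # sensor library alone: also works
# Used together: the radio's handler has been silently replaced.
# No compiler warning, no runtime error -- the radio just stops working.
```

The point is that nothing fails loudly: the conflict only shows up as "it doesn't work when I use them together".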
I see similar issues with other platforms. GNU Radio is another nice example: people mix blocks without knowing what exactly they do.
As said, this is all very nice, as long as it works.
I wonder if code generated by LLMs will not result in the same kind of problems: people who do not have the background knowledge needed to troubleshoot issues once problems become more complex.
(Just a thought / question ... not an assumption)
To be honest, I have no personal experience with LLMs (kind of boring, if you ask me). I do know two colleagues at work who tried them. One -who has very basic coding skills (dixit himself)- is very happy. The other -who has much more coding experience- says that his tests show they are only good at very basic problems. Once things become more complex, they fail very quickly.
I just fear that the result could be that -if LLMs can be used to provide sample code for any project- open-source projects will spend even less time writing documentation ("the boring work").
Hmmm .. 🤔 The best way not to make friends with somebody with over 30 years of coding experience: suggest he use ChatGPT to write a computer program 🤣🤣
Wow! So many answers in such a short time. Thanks all! 👍 (I will not spam the channel by sending a thank-you to everyone, but this is really greatly appreciated)
Concerning ncurses: I did hear of it but never looked at it myself. What is not completely clear to me: I know you can use it for 'low-level' things, but does it also include 'high-level' concepts like windows, input fields and so on?
The blog mentioned in one of the other posts only shows low-level things.
Yes, that's a very useful idea. Thanks!
If you get your domain from OVH, you get one single mailbox for free (be it with a lot of aliases, like a different email address for every service/website you use).
What is your 'deleted files' policy? How long do you keep them? I had a similar issue but then found out that the Nextcloud cron process wasn't running, so files in the 'deleted files' folder were never really deleted.
Well, based on the advice of Samsy: take a backup of your home-server network to a NAS on your home network. (I do hope that your server segment and your home segment are two separate networks, no?) Or better, set up your NAS at a friend's house (and require MFA or a hardware security key to access it remotely).
What was that saying again?
"the biggest threat to the safety and cybersecurity of the citizens of a country ... are managers who think that cybersecurity is just a number in an Excel sheet"
(I don't know where I read this, but I think it really hits the nail on the head)
I have been thinking the same thing.
I have been looking into a way to copy files from our servers to our S3 backup storage without having the access keys stored on the server (as I think we can assume that will be one of the first things the ransomware toolkits will be looking for).
Perhaps a script on a remote machine that initiates an ssh to the server and does an "s3cmd cp" with the keys entered from stdin? So far, I have not found how to do this.
Does anybody know if this is possible?
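One possible sketch of that idea (untested assumptions: that s3cmd will accept a config file given as /dev/stdin via its -c option, and placeholder host/bucket names): pipe a throwaway s3cmd config over the ssh channel, so the keys never touch the server's disk and never appear in its process list.

```python
import subprocess

def make_s3cfg(access_key, secret_key):
    # Minimal s3cmd-style config, generated on the fly and held only in memory.
    return "[default]\naccess_key = %s\nsecret_key = %s\n" % (access_key, secret_key)

def push_backup(host, local_path, bucket, access_key, secret_key):
    # The config travels over ssh stdin. Assumption: s3cmd can read its
    # config from /dev/stdin -- verify this with your s3cmd version first.
    remote_cmd = "s3cmd -c /dev/stdin put %s s3://%s/" % (local_path, bucket)
    return subprocess.run(["ssh", host, remote_cmd],
                          input=make_s3cfg(access_key, secret_key).encode(),
                          check=True)
```

The keys would still sit in the memory of the controlling machine while the copy runs, but that machine can be one the ransomware never sees.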
Yes. Fair point.
On the other hand, most of the disaster scenarios you mention are solved by geographic redundancy: set up your backup / DRS storage in a datacenter far away from the primary service. A scenario where all services, in all datacenters managed by a cloud provider, are impacted is probably new.
It is something that, considering the current geopolitical situation we are in now -and that I assume will only become worse- we should better keep in the back of our minds.
I will put "multicloud" on my wishlist.
Looking at it from an infosec point of view, cloud providers are an ideal target. All the customers who have just lost all their data and are now complaining to the cloud provider are the ideal pressure mechanism to get the cloud provider to pay out.
In this case, it is not you -as a customer- that got hacked, but the cloud company itself. The ransomware gang encrypted the disks at server level, which impacted all the customers on every server of the cloud provider.
The issue is not cloud vs. self-hosted. The question is "who has technical control over all the servers involved". If you home-host a server and keep a backup of it on a friend's network, and your username/password pops up on an infostealer website, you will be equally in trouble!
A URL 'Free up to some-end-date'. ???
Phishing link? 🤔