

An “exponential drop” would be a drop that follows an exponential curve, but this doesn’t. What you mean is a “drop in the exponent”, which however doesn’t sound as nice.
and not an exponential speed-up (O(2^n) to O(n): exponential to linear)
Note that you can also have an exponential speed-up when going from O(n) (or O(n^2) or other polynomial complexities) to O(log n). Of course that didn’t happen in this case.
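One way to see why (my own framing, not part of the original comment): writing k = log₂ n, going from O(n) to O(log n) is going from O(2^k) to O(k), the same exponent-to-linear shrinkage as going from O(2^n) to O(n).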
I’m talking about compile time.
Start with all of the known safe cases (basic types should be fine), then move on to more dubious options (anything that supports iteration). Then allow iterable types but don’t allow iterating over a mutable reference. And so on. If it’s a priority to loosen up the rules without sacrificing safety, surely some solutions could be found to improve ergonomics.
If you want guaranteed safety then the borrowing rules are the most flexible as far as we know.
Just to give a couple of examples of how your idea might be flawed: what do you consider “basic types”? Are enums basic types? Then you’ve got an issue, because you might get a reference to the contents of an enum and then replace the enum with another variant, and suddenly you’ve got a dangling reference. Would you prefer to prevent creating references to the contents of an enum? Then you’re more restrictive than the borrowing rules.
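Here’s a minimal sketch of that enum case (`Id` is a made-up type, just for illustration): taking a reference into the enum blocks replacing the variant, precisely because the commented-out assignment would leave the reference dangling.

```rust
// Hypothetical example: a reference into an enum's payload vs. replacing the variant.
enum Id {
    Number(u32),
    Name(String),
}

fn main() {
    let mut id = Id::Name(String::from("alice"));
    if let Id::Name(name) = &id {
        // `name` borrows the String stored inside `id`, so the enum cannot be
        // overwritten while that borrow is alive:
        // id = Id::Number(42); // error[E0506]: cannot assign to `id` because it is borrowed
        println!("{name}");
    }
}
```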
Allowing iterable types but not iterating over mutable references is not enough for safety unfortunately. The basic example is getting a reference to an element of a Vec and then calling push on the Vec. This seems very innocent, but pushing on a Vec might reallocate its backing buffer, making the previous reference dangling. Again, would you prevent taking references to elements of a Vec? Then again you become much more restrictive than the borrowing rules.
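A minimal sketch of that Vec case: push takes &mut self, which conflicts with the outstanding element borrow, so the potential reallocation (and dangling reference) is caught at compile time.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow of an element
    // v.push(4); // error[E0502]: cannot borrow `v` as mutable because it is also borrowed as immutable
    println!("{first}");
}
```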
I really liked the idea of an optional, built-in GC w/ pre-1.0 Rust where specific references could be GC’d
That was just syntax sugar for Rc/Arc, and you can still use them in today’s Rust, albeit with slightly worse ergonomics (no autoclone for example).
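For illustration, a minimal sketch of what that looks like in today’s Rust: the clone of the handle has to be written out explicitly, which is the ergonomic difference (no autoclone) mentioned above.

```rust
use std::rc::Rc;

fn main() {
    let shared = Rc::new(String::from("hello"));
    let another = Rc::clone(&shared); // explicit clone of the handle, not of the String
    println!("{shared} {another} (refcount: {})", Rc::strong_count(&shared));
}
```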
Sure, but that doesn’t mean it can’t be better.
It’s surely interesting though that people continuously complain about them and then praise a language whose equivalent feature is much more restrictive!
Surely the compiler could delay optimizations until the entire project is built, no? Then it knows what implementations exist, and the developer could then decide how to deal with that
It’s not really about optimizations but rather:
when checking some impls for overlap, the compiler assumes that impls that the orphan rules block will never exist. You thus need to either disallow them (which would make the compiler more restrictive for non-application crates!) or have a way to track them (which is easier said than done, since coherence and trait checking are very complex)
when generating code where specialization and/or vtables are involved. This could be delayed until the last crate is compiled, but at the expense of longer compile times and worse incremental compilation performance.
Sure, and ideally those cases would be accounted for,
AFAIK there’s nothing yet that can account for them without allocating everything on the heap.
or at the very least the dev could annotate each use to turn the borrow checker off for each instance, and that could print something at build time and a linter could flag over it. Unsafe blocks aren’t feasible for everything here.
You want some annotations to break out of the safe subset of the language, but aren’t unsafe blocks basically that? Or perhaps you want something more ergonomic, at the expense of safety?
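For reference, a minimal sketch of that existing escape hatch: inside an unsafe block you can do things the borrow checker cannot verify, and upholding the invariants becomes your responsibility.

```rust
fn main() {
    let mut x = 5;
    let p = &mut x as *mut i32; // raw pointer, not tracked by the borrow checker
    unsafe {
        *p += 1; // writing through a raw pointer requires an unsafe block
    }
    println!("{x}");
}
```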
dreaded orphan rule
Yeah, that always struck me as stupid.
It is necessary to guarantee consistency in the trait system, otherwise it could lead to memory unsafety. Even relaxing it in cases where overlapping implementations could always be caught is still problematic, because the compiler sometimes performs negative reasoning about traits that it knows cannot be implemented downstream due to the orphan rule.
And if you think about it, the orphan rule is not worse than what other languages allow. For example, C# only allows implementing an interface when defining a type, while the orphan rule also allows you to implement a trait when defining the trait itself. Not to mention being able to implement a trait only when a generic parameter implements another trait.
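A minimal sketch of what the orphan rule does and doesn’t allow (LocalType and LocalTrait are made-up local names):

```rust
use std::fmt;

struct LocalType;
trait LocalTrait {}

// Allowed: a foreign trait (Display) implemented for a local type.
impl fmt::Display for LocalType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "LocalType")
    }
}

// Allowed: a local trait implemented for a foreign type.
impl LocalTrait for Vec<u8> {}

// Rejected by the orphan rule: a foreign trait for a foreign type.
// impl fmt::Display for Vec<u8> { /* ... */ } // error[E0117]

fn main() {
    println!("{}", LocalType);
}
```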
Yeah, the borrow checker is a bit too strict IMO. Ideally, the borrow checker would only trigger on things that could be run in parallel, such as with threads or async.
You can still trivially violate memory safety without multithreading or concurrency. The article touches on this a bit (they mention e.g. iterator invalidation) but they fail to address all issues.
https://manishearth.github.io/blog/2015/05/17/the-problem-with-shared-mutability/
While they don’t write it explicitly I think they’re looking for a good junior developer, given that:
they are not asking for Rust work experience, but rather for good Rust knowledge and experience with open source development, both of which you can obtain on your own if you’re a competent student
57k€ is not a bad salary for a junior developer in Europe
the two founders graduated recently (~3 years ago) and have been working on Typst since then (their master’s thesis was on creating Typst itself), so it’s likely they are looking for someone like them.
Linus got it right, it’s just that other fundamental userspace utilities didn’t.
If Italy really and truly doesn’t want a DNS server that is doing this to be accessible in Italy, go after Italian network service providers
They’re already doing that for blocking IPs, and ended up blocking Google Drive and some Cloudflare CDN IPs.
Where I live there are a lot of “temporary” 30km/h speed limits that were never removed by the road workers after the work was completed.
Just an arm and leave it with the battery, problem solved.
A slightly better metric to train it on would be the chances of survival/years of life saved thanks to the transplant. However, those also suffer from human bias, due to the past decisions that influenced who got a transplant and thus what data we were able to gather.
And then Discord arrived
The comparison with Discord makes no sense; the feature seems to be just a normal group chat, like the ones in Telegram/WhatsApp/iMessage. Discord’s killer feature is the ability to have multiple channels within a server, which allows for more organization.
I hate all the cruft in my home directory, but I also hate when stuff suddenly stops working after an update, or when all the documentation online talks about something that doesn’t work on my system or isn’t there anymore. Developers are the ones that will have to deal with people having these issues, so I can see why they are reluctant to implement the naive solutions that some ask for.
Not sure if Discord is the best example here, as it didn’t support the screen capture portal for a very long time.
I don’t get sports fanatics…
Most people just want to watch a match of their home town/favourite team maybe once a week. This is very moderate, what’s so bad about that? However, in order to do that they have to either spend an absurd amount of money to get access to all matches, or spend a bit less money to play the lottery and hope the match they wanted to watch gets selected.
The only options you have are:
Dazn Standard (45€/month, 35€/month if you pay for 12 months) to get access to all the Serie A matches (and a whole bunch of other sports nobody cares about)
Dazn Goal Pass (20€/month, 14€/month if you pay for 12 months) to get access to 3 Serie A matches per week which you don’t get to choose (and a bunch of other sports nobody cares about)
Sky (16€/month for the first 18 months, then whatever Sky wants after that) to get access to 3 Serie A matches per week which you don’t get to choose (and a bunch of other stuff nobody cares about)
Most people care only about some specific matches, so your only option is Dazn.
Dazn is also a very crappy service, it often has connectivity problems and also has ads. Fun fact, if you get a connection issue while watching a Dazn ad, it will restart.
So, as usual, monopoly, high costs and crappy services drive piracy.
If you distribute your app via Flatpak, what benefit is there over “disk space” (irrelevant for all but embedded devices)
Everyone always focuses on disk space, but IMO the real issue is download size, especially when you update a bunch of flatpaks together.
I still prefer the upstream flatpaks over Fedora’s though.
You keep the recovery codes unexposed to the internet or obfuscated in some way, unlike your usual password.
How is a strong password I use exclusively for Bitwarden “exposed to the internet”? I do see the value of this for people that don’t care about security and reuse the same password everywhere: in that case you would need something like phishing to expose the 2FA code or the recovery code, and just a leak of the email-password combination from another website would not be enough. But what’s the point if I’m already using a unique strong password specifically for Bitwarden?
Looks like the delay in 2011 was so big that the data became available after the 2017 one.