The way it works on macOS is that you select the ‘looks like’ resolution to determine the size. For example, if you have a 4K monitor you can set a ‘looks like’ resolution of 2560x1440. Internally it always renders at 2x, so in this case it will render to 5120x2880. That image is then scaled down to the actual display resolution, e.g. 3840x2160. It’s basically supersampling.
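To make the arithmetic concrete, here’s a minimal sketch in C using the example numbers above (a 4K panel set to a 2560x1440 ‘looks like’ resolution); it’s purely illustrative, not Apple API code.

```c
#include <stdio.h>

/* Illustrative sketch of the macOS "looks like" scaling math described
 * above, using the example numbers from the comment. Not Apple code. */
int main(void) {
    int looks_like_w = 2560, looks_like_h = 1440; /* UI size the user picks */
    int native_w     = 3840, native_h     = 2160; /* physical panel pixels  */

    /* The backing store is always rendered at 2x the "looks like" size */
    int render_w = looks_like_w * 2;              /* 5120 */
    int render_h = looks_like_h * 2;              /* 2880 */

    /* That oversized image is then scaled down to the panel resolution */
    double scale = (double)native_w / render_w;   /* 0.75 in this example */

    printf("render %dx%d -> panel %dx%d (scale %.2f)\n",
           render_w, render_h, native_w, native_h, scale);
    return 0;
}
```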
You could also do what they do in the Android world, let the phone run like crap all the time, then it won’t need to slow down because it was slow from day one.
Look, I get that you hate Apple and desperately want to find fault with everything they do. I agree they are a bunch of greedy bastards that try to squeeze as much money out of their customers as they can, but this just isn’t one of the ways they do it. In fact it’s the exact opposite: it ensures old devices remain usable for longer.
The fuckers sold us devices that worked perfectly for years until they sent firmware updates to slow them down.
This slow-down only triggers after the device has already had a brown-out. That is: it has to crash at least once due to a worn-out battery.
“The brakes on my car worked fine for years and now they suddenly don’t work anymore”. Batteries are a consumable. They wear out. Phones were crashing due to it. They pushed an update that ensured the devices remained usable instead of crashing under load.
Could they have communicated it better? Yes. Was it the right solution from a technical point of view? Also yes.
It doesn’t get progressively slower over time; it’s either in degraded mode or it isn’t.
If you want to use a car analogy, it’s comparable to limp mode. When your car detects an engine problem it goes into limp mode, in which you don’t have full performance but you can at least get home. Would you rather have your car not do this and risk damaging the engine, or would you prefer it to simply stop working and leave you stranded?
Batteries wear out; it’s an unfortunate property of our current battery tech. You can either let your phone get unstable (risking data loss), have it refuse to work at all, or let it run in a reduced performance mode so it at least stays usable. Those are your options. Pick one.
Walk into an Apple store, hand over the phone, pick it up an hour later. Couldn’t be easier. Looking at prices, 3rd party repair services using non-original parts charge the same as or more than Apple does.
That was not to get you to buy another phone, in fact the opposite. It was to keep your phone functional even though it had a worn out battery.
In phones there is this concept called a ‘race to idle’. Basically, you want your phone to do nothing, because doing nothing uses very little energy. So when you do something on your phone, the goal is to do it as quickly as possible so it can go back to doing nothing and save battery. Your phone will be in this low-power idle state 99.999% of the time. You still want your phone to be responsive, though: when you click on something you want it to respond without delay. That means that when you tell it to do something it has to go from this low-power state back to a high-speed state.
Now, iOS is really aggressive about this: it ramps up the CPU speed really fast. As a result, the power draw of the CPU goes from almost nothing to a high power draw very quickly. This causes problems with old batteries. As a battery ages it not only loses capacity, but it also becomes slower to respond to changes in power draw. If the CPU needs a lot of power quickly and the battery can’t keep up, you get a brownout (a drop in voltage) and the phone basically crashes and reboots.
So what Apple has done is that when iOS detects this happening (i.e. a crash due to the battery being unable to keep up), it ramps up the CPU a little more slowly. Or to use a car analogy: they don’t change the top speed, but are less aggressive on the gas, so it takes a little longer to get to that top speed. If you replace the battery it goes back to the original behavior.
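To illustrate the idea (this is a made-up sketch, not Apple’s actual code, which isn’t public), here’s a hypothetical ramp limiter in C: the top speed stays the same, only the step size per tick changes when the battery is flagged as degraded.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the "ramp up more slowly" idea described above.
 * The numbers and function are invented for illustration only. */
#define FREQ_MAX_MHZ 2400

/* Pick the next CPU frequency. With a healthy battery, jump straight to max
 * (aggressive race-to-idle). In degraded mode, only move up a limited amount
 * per tick, so the power draw rises gradually instead of spiking. Note that
 * the top speed itself is never lowered. */
static int next_cpu_freq(int current_mhz, bool battery_degraded)
{
    if (!battery_degraded)
        return FREQ_MAX_MHZ;

    int step = 200;                       /* hypothetical MHz per tick */
    int next = current_mhz + step;
    return next > FREQ_MAX_MHZ ? FREQ_MAX_MHZ : next;
}

int main(void)
{
    for (int degraded = 0; degraded <= 1; degraded++) {
        int freq = 300;                   /* start from the idle clock */
        printf("%s battery:", degraded ? "degraded" : "healthy");
        for (int tick = 0; tick < 5; tick++) {
            freq = next_cpu_freq(freq, degraded);
            printf(" %d", freq);
        }
        printf(" MHz\n");
    }
    return 0;
}
```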
This is basically a good thing; the alternative is that your phone keeps crashing. Where they screwed up is that they failed to inform users of this.
macOS converts x86 code to ARM ahead of launching an app, and then caches the translation. This adds a small delay the first time you launch an x86 app on ARM. It also does on-the-fly translation when needed, for applications that generate code at runtime (such as scripting languages with JIT compilers).
The biggest difference is that Apple has added support for an x86-like strong memory model to their ARM chips. ARM has a weak memory model. Translating code written for a strong memory model to run on a CPU with a weak memory model absolutely kills performance (see my other comment above for details).
Any program written for the .NET CLR ought to just run out of the box.
Both of them?
There’s also an x64 to ARM translation layer that works much like Apple’s Rosetta.
Except for the performance bit.
ARM processors use a weak memory model, whereas x86 uses a strong memory model. That means x86 guarantees the actual order of writes to memory is the same as the order in which those writes execute, while ARM is allowed to re-order them.
Usually the order in which data is written to RAM doesn’t matter, and allowing writes to be re-ordered can boost performance. When it does matter, a developer can insert a so-called memory barrier; this ensures all writes before the barrier are finished before the code continues.
However, since this is not necessary on x86 (all writes are ordered anyway), x86 code does not include these memory barrier instructions at the spots where write order actually matters. So when translating x86 code to ARM code, you have to assume write order always matters, because you can’t tell the difference. That means inserting a memory barrier after every write in the translated code, which absolutely kills performance.
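To make that concrete, here’s a small, self-contained C example (illustrative only, nothing to do with Rosetta’s internals) of a spot where write order matters: one thread fills in data and then raises a flag, with explicit barriers so the reader can never observe the flag before the data. x86 hardware keeps those two stores in order even without special instructions, which is exactly why compiled x86 code carries no hint about where ordering matters; on ARM the fences are what provide the guarantee.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int data;            /* plain data written before the flag        */
static atomic_int ready;    /* flag telling the consumer data is there   */

static void *producer(void *arg)
{
    (void)arg;
    data = 42;                                      /* plain store              */
    atomic_thread_fence(memory_order_release);      /* barrier: data before flag */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    while (atomic_load_explicit(&ready, memory_order_relaxed) == 0)
        ;                                           /* spin until flag is set    */
    atomic_thread_fence(memory_order_acquire);      /* barrier: flag before data */
    printf("data = %d\n", data);                    /* guaranteed to print 42    */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```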
Apple includes a special mode in their ARM chips, only used by Rosetta, that enables an x86-like strong memory model. This means Rosetta can translate x86 to ARM without inserting those performance-killing memory barriers. Unless Qualcomm added a similar mode (and AFAIK they did not) and Microsoft added support for it in their emulator, performance of translated x86 code is going to be nothing like that of Rosetta.
Signed, a disabled and unable-to-work guy who enjoys IT and programming
You don’t need to pay to develop an app, you only need to pay to put it in the store.
So develop your app. If it’s any good, pay the $100, sell it in the store and it’ll pay for itself. It may even make you a little profit. If it’s not good enough for that, why does it need to be in the store?
Say you're maintaining a FOSS app on your own time. How interested would you be in paying Apple $100 annually for the privilege of giving their users free stuff?
Depends on the reason you’re maintaining that app to begin with. If it’s a hobby, then $100/year is a pretty cheap hobby.
Good for you that you have so much disposable income. Many hobby devs such as myself aren’t so lucky.
Go talk to some random people and ask them how much they spend on their hobbies; I bet you won't find many people who have a hobby that costs less than $100/year. It's a damn cheap hobby.
Which is one reason why I don’t make Apple apps.
That's probably a good thing. I don't think we need more apps made by amateurs in the app store.
I said small, as in a hobby or FOSS app. It is an obstacle to be forced to pay money to Apple for the 'privilege' of being able to install it on their devices.
How is the $100 an obstacle to any legitimate developer? The only ones it hurts are those who would otherwise flood the app store with crap submitted from throwaway developer accounts.
You assume incorrectly.