Posts: 7 · Comments: 3177 · Joined: 2 yr. ago

  • it's not a slur, it's a signal that the poster types like they're on reddit.

  • we're comparing it to a system where none of that has been done. it's sort of a "god of the gaps" situation but the gaps are shaped exactly like pieces in a puzzle. we can extrapolate the form of the proof even if we can't show it. the same is not true of the other camp.

  • a hypothesis based on established facts is no longer belief but extrapolation.

  • the atheist says "i will not believe". the agnostic says "i can not believe". one is as dogmatic as the beliefs they purport to refute, the other lacks the capacity for dogma, as belief for them is simply not possible.

  • no

  • i don't believe in wifi, just like i don't believe in trees. i know they're there. that requires no belief.

  • it's not vlad's fault russian windows are so thin

  • data goes in, data goes out. you can't explain that.

  • "believe in wi-fi"

  • the soviet union flag

  • I don’t know where you’re getting this from

    probably from 3g

  • not per cell. per tower, sure, but a tower contains tens of individual cells. i've been to places with just two or three cells where, on days with a couple hundred people around, the signal just craps out.

  • you can get it to like 50m actually. not reliably, and absolutely not in that kind of environment, but it is possible.

  • there's also sidelink relaying, which lets one device relay a 5G signal for another. i don't think most people appreciate losing battery to route someone else's call though.

  • without reading the article: a cell's bandwidth is shared between every device connected to it, so each new connection shrinks everyone's slice, and after something like 30-40 devices the cell is effectively full. it then hands devices over to another available cell, which may also be full. so if you're in a crowd of thousands, your phone desperately bounces off every cell it can see, keeps getting rejected, and boosts its signal every time to reach cells farther away.

    this is why we have mobile cell towers for events. rough sketch of the shared-bandwidth math below.
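
    a toy model of that even-split idea (every number here is a made-up assumption for illustration; real cells schedule traffic far more cleverly than an even split):

```python
# toy model: a cell's downlink capacity is split evenly between
# connected devices, and the cell refuses new connections once the
# per-device share drops below a usable floor. the capacity and
# floor values are illustrative assumptions, not real LTE/5G figures.

CELL_CAPACITY_MBPS = 150.0   # assumed total downlink for one cell
USABLE_FLOOR_MBPS = 4.0      # assumed minimum usable share per device

def per_device_share(devices: int) -> float:
    """Even split of the cell's capacity across connected devices."""
    return CELL_CAPACITY_MBPS / devices

for devices in (1, 2, 10, 30, 40, 100):
    share = per_device_share(devices)
    state = "ok" if share >= USABLE_FLOOR_MBPS else "full -> handover"
    print(f"{devices:4d} devices: {share:6.2f} Mbps each ({state})")
```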

  • openai actually released some figures on power use per prompt, but the caveat is that a single prompt to their services can trigger multiple generation passes (the "thinking" mode), so the per-prompt numbers aren't consistent.

  • while this is true in isolation, the number of users means that inference now uses more power than training for the large actors.

  • sure, hardware wattage × time taken per prompt. which model specifically are you referring to and on what hardware?

    Edit:

    say, for example, that i'm running a model that takes ten seconds to respond on my Radeon 7900 XTX. the card is power limited to 300W, but the rest of the system also pulls power while it runs, so let's call it 400W total.

    to get watt-hours we take watts times hours. one second is 1/3600th of an hour.

    that comes out to 400 × 10 ÷ 3600 ≈ 1.11Wh. so that's the equivalent of leaving a 6W LED light on for about 11 minutes, or a 50W old-fashioned incandescent bulb on for 80 seconds.
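
    the same arithmetic as a quick script (the 400W draw and ten-second response time are the assumed numbers from above, not measurements):

```python
# back-of-the-envelope energy per prompt: watts × hours,
# where one second is 1/3600th of an hour.

SYSTEM_DRAW_W = 400.0        # 300W GPU limit plus the rest of the box (assumed)
SECONDS_PER_PROMPT = 10.0    # assumed response time

def prompt_energy_wh(draw_w: float, seconds: float) -> float:
    """Energy in watt-hours for one prompt."""
    return draw_w * seconds / 3600.0

energy = prompt_energy_wh(SYSTEM_DRAW_W, SECONDS_PER_PROMPT)
print(f"{energy:.2f} Wh per prompt")               # ≈ 1.11 Wh

# how long the same energy would run a given bulb
for label, bulb_w in (("6W LED", 6.0), ("50W incandescent", 50.0)):
    seconds = energy / bulb_w * 3600.0
    print(f"{label}: about {seconds:.0f} seconds")  # ≈ 667s (~11 min) / ≈ 80s
```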

  • i appreciate you violet. you're unhinged in the fun way.