• 1 Post
  • 59 Comments
Joined 3 years ago
Cake day: August 6th, 2023







  • Since no one has gone into detail on this yet:
    LCDs are already transparent. They filter light, and their back is usually just lit with uniform white. Used freestanding instead, they give you a pane of glass you can selectively darken. This is sometimes done in custom PC cases to show info on a glass side panel.

    Unfortunately, the way LCDs work means they block at least 50% of the light even at their brightest, and much more once you add color filters to escape black-and-white hell. If e-ink develops further and matches LCDs in speed, it should be possible to change the materials in one so the pigment doesn't block light when it sits on one side.
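The 50% figure comes from basic polarizer optics; a quick sketch (standard physics, not from the original comment):

```latex
% An ideal polarizer passes half of unpolarized backlight:
I_1 = \tfrac{1}{2} I_0
% Malus's law at the analyzer, after the liquid crystal rotates the light by \theta:
I_2 = I_1 \cos^2\theta = \tfrac{1}{2} I_0 \cos^2\theta
% Even a fully "on" pixel (\theta = 0) therefore transmits at most I_0 / 2.
```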

    As for getting brighter, that is on the edge of viable, since we are just short of MicroLED screens. There are already larger screens using that technology, and if you really wanted to you could make small ones too; it would just be really expensive and manual. Viable mass production is still in development, as current methods produce too many dead pixels.

    So within a few years and with some development to adjust eink to the purpose, it should be possible to get a pretty transparent pane that can become opaque and specular or matte in any color and saturation and brightness, and also emit light at will on its surface.

    The driver electronics should shrink a lot and could probably be made small enough that you literally couldn't see them. The limit is probably going to be the power source, which will likely have to wait for some far-off materials-science magic, like graphene capacitor batteries that manage to be transparent so you can embed them inside the pane of glass.




  • In those cases it's less painful to use a website to extract the transcript and read that.
    You can skim text far more easily than a video.


    TL;DR: DDR RAM refreshes itself, which sometimes stalls the CPU when it reads RAM. High-speed traders don't want that, so they figure out ways to keep data in two copies on two different portions of RAM that refresh at different times. This is impractical for normal programs. Most of the effort goes into working around multiple abstraction layers, where first the OS and then the RAM itself change where data actually ends up.

    Every 3.9 microseconds, your RAM goes blind. Your RAM physically has to shut down to recharge.
    This lockout is defined by the JEDEC spec as tRFC, the refresh cycle time. Now, a regular read on DDR5 might take you like 80 nanoseconds. But if you happen to accidentally get caught by this lockout, that's going to bump you up to about 400 nanoseconds.

    Think for a second. What industry might really care about wasting a couple hundred nanoseconds, where one incorrectly timed stall would cost you millions of dollars? That's right, the world of high-frequency trading.

    [custom benchmark program on DDR4 RAM and a 2.65 GHz CPU:] When you plot the gaps between the slow reads, they're all the same: 7.82 microseconds [20,720 cycles] apart every single time. […] So, the question is, if this is so periodic, can we potentially predict when the refresh cycle is going to happen and then try to read around it?

    See, it’s not like the whole stick of RAM gets locked when the refresh cycle happens. It’s a lot more granular than that. With DDR4, for example, the refresh happens at the rank level. And then DDR5 gets even more complicated where you can like subsection down even further than that.

    The memory controller does what’s called opportunistic refresh scheduling, which basically means that it can postpone up to eight refreshes and then catch up later if we happen to be in a busy period. […] how the heck are you going to predict opportunistic refresh scheduling?

    Then a section about virtual memory management in modern OSes.

    And I take two copies of my data and I space them nicely 128 bytes apart. And I’m feeling pretty good about myself, but for all I know, it could be straddling a page boundary and then the OS could have decided to put them wherever it felt like putting them.

    Physical RAM address issues:

    So the XOR hashing phase kind of acts like a load balancer baked directly into the silicon itself. It takes in your physical address, does a little bit of scrambling, and tries to spread accesses out evenly across all of the banks and channels.

    This also helps against Rowhammer attacks, where repeatedly writing to rows physically adjacent to a target address lets you flip bits at that address.

    So, DRAM [XOR] hashing strategies were already not documented publicly. But then after the entire rowhammer thing, obviously, there was even less incentive to publish these load balancing math strategies publicly.

    If AMD and Intel documented this kind of stuff, they’d kind of be like locking themselves into a strategy because customers would start to build against it. And then next year when it comes around, it’s really going to make your life difficult because you’re not going to be able to change things nearly as easily. But if you just don’t document it, well, who’s going to complain? only weirdos doing crazy stuff like me.

    Inside of your CPU, right next to the memory controllers, there are actually tiny little hardware counters, one for every channel. […] If we do a simple sudo modprobe amd_uncore, it reveals those hardware counters to the standard Linux perf tool. […] If I write a tight loop of code that constantly flushes the cache and hammers one particular memory address, then one counter should start to light up. And theoretically, this should tell us exactly what channel our data is living on.

    Can’t really tell what’s going on here. Well, that, my friend, is OS noise. […] The problem is these counters are pretty dumb. So you can’t tell it only count the reads from this particular process. […] All we need to do is run it 50,000 times. […] See that spike? Super cool. And now I really know where my data lives.

    So, to me, I don't really care which channel I'm ending up on, whether that's channel 3, channel 7, whatever; doesn't matter to me. All I need to do is make sure I'm ending up on different channels. […] The mathematical answer is that XOR is linear over GF(2), which is actually really simple. Basically, that means that no matter what scrambling the memory controller does, flipping a given base bit will always flip the output the same way, no matter how many things are chained together.

    Goes on to write low-latency benchmarks, which do show lower latency.









  • Redjard@reddthat.com to Science Memes@mander.xyz · Turbines are our friends (12 days ago)

    Lots of huts probably have an AC or heater. This could all be the same device, at which point it'd definitely be easier than running the pipes for water and maintaining pumps and a dedicated tank.
    I don't see a reason you couldn't have a simple AC window unit that also has a warm-water port, which you plug into with a single line going straight to your panels on the roof.

    Edit: And once batteries are more affordable (or if you have a few grand to burn) you can then plug in a battery pack conveniently on the indoors side of your window unit.
    The indoors side can just have a few regular outlets you can extension cord around to where you need them.



  • Edit Edit: Yeah you are definitely right, it doesn’t show the updated votes, the order doesn’t change. I was probably seeing new posts mixed in.

    Edit: Actually you may be right, I’ll have to wait some more for testing.

    That sounds odd. The cases I saw definitely had updated scores. I usually read all the 400+ posts, yet there were clearly new ones that had reached 400 since I last read.
    And restarting the app changes the sort order compared to before, while the votes stay the same.

    The vote numbers may be getting fetched by whatever mechanism the app uses to update old info when feeds have been open for a while, but the sort order is definitely wrong.

    Neither pull-down refresh nor exiting and reopening the feed fixes it.