Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)

  • 0 Posts
  • 142 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • Wayland and GPU stuff should be very good on EndeavourOS, better than most systems I have seen, certainly better than openSUSE Leap and Mint. I don’t know Fedora, however.

    EndeavourOS has its own base repo, but also the regular Arch stuff like the AUR. The AUR is probably the best source for all those programs that are usually missing from your repo, and since the base system in EndeavourOS is stable, there is no problem if some random program needs a special version or a manual install now and then; it won’t affect anything else.
    The AUR is not the main package source for EndeavourOS.
    I don’t know your hardware, but the combination of up-to-date system components, EndeavourOS’s focus on just working, and all the shit in the AUR (to my understanding Flatpak is currently quite useless for drivers) sounds like it should accept any hardware at least as well as other Linux distros.

    On a side note about Flatpaks: there is this long-running conflict between stability, portability, and security. The old-school package systems are designed to allow updating libraries system-wide, switching in ABI-compatible replacements containing fixes. On the other hand, you have AppImage, Flatpak, and the like, which bring their own everything and will therefore keep running on old, unsafe libraries, sometimes for years, before the developers of all those specific projects update their projects’ copies of those libraries.











  • It is neither established nor how English works at all. You can’t just always refer to something by name only; this isn’t Japanese.

    I/you/they is established, same as I/you/he or I/you/she. English is designed around pronouns. Even if what you did was more common, it would still cause massive confusion and make the English language not work properly.

    If you wanna be innovative, use I/you/Drag or something, that might still work.
    That way, only third-person pronouns are dropped.
    Drag does not understand how languages are used; Drag should really think about practicality.
    I still think even that is too impractical. Pronouns are used to communicate that the subject or object has not changed, information you have to work out yourself if pronouns are missing.



  • which also references an effort to use the media to quietly disseminate Google’s point of view about unionized tech workplaces.

    Bogas’ order references an effort by Google executives, including corporate counsel Christina Latta, to “find a ‘respected voice’ to publish an op-ed outlining what a unionized tech workplace would look like,” and urging employees of Facebook, Microsoft, Amazon, and Google not to unionize.

    In an internal message, Google human resources director Kara Silverstein told Latta that she liked the idea, “but that it should be done so that there ‘would be no fingerprints and not Google specific.’”

    From the article posted by 100_kg_90_de_belin.

    Google seemingly does care about their internal image, so they will only make their actions obvious when they fire you for bogus reasons after you try to join a union.
    Quite nasty in that they give you no hints about how extreme their efforts on this are. They monitor internal employee tools like they are cosplaying the NSA, but you wouldn’t know until you are fired out of the blue.


  • You can compare totals better than per-user figures at these scales.
    Lemmy needs a certain amount of performance to keep up with federation, but once you have all the images, posts, and comments, you don’t need second copies of them until you scale to a size that mandates multiple machines. That, I would guess, is more in the 6+ digit user range, where you start averaging requests per second, not per minute (rough numbers at the end of this comment).

    In some sense, every Lemmy user is a user of your instance via federation. You need to pay the performance cost for all 100k of us whether your instance hosts 10 or 10k of them locally. Local users are just a bit more demanding on your hosting resources.

    I suspect the bias we see here, with larger instances paying a bit more (50-ish instead of 10-ish), is more due to reliability and snappiness than actual performance needs. You tend to get optional, pricier perks with smaller gains that you might not go for on a smaller instance.
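
    As a back-of-envelope sketch of where that per-second threshold sits. The per-user request rate below is a pure assumption for illustration, averaged over mostly idle registered accounts, not a measured Lemmy figure:

```python
# Rough estimate: average request rate as a function of registered users.
# REQUESTS_PER_USER_PER_DAY is an assumed figure averaged over mostly idle accounts.
REQUESTS_PER_USER_PER_DAY = 5
SECONDS_PER_DAY = 24 * 60 * 60

for users in (1_000, 10_000, 100_000):
    avg_rps = users * REQUESTS_PER_USER_PER_DAY / SECONDS_PER_DAY
    print(f"{users:>7} users -> ~{avg_rps:.2f} requests/second on average")

# 1k users   -> ~0.06/s (a few per minute)
# 10k users  -> ~0.58/s
# 100k users -> ~5.8/s: only around six-digit user counts does the average
# clearly exceed one request per second under these assumptions.
```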




  • Yeah, for amateurs it’ll be a while longer before this tech becomes easily available.
    Though it is also fundamentally fixable: you can take the output of your sensor and apply the same sort of logic to it as the large professional telescopes do. The blocked-out spots will be larger, since the telescope will not correct for atmospheric distortion and will likely be in a less favorable location, but you can still do far better than throwing out entire frames or even entire exposures.
    It is of course a much larger ask for hobby astronomers to deal with this initial wild-west software mess of figuring all of that out.

    As for the RF mess, this is the first time I have heard of that. It honestly seems kind of odd to me: there are a lot of frequency-control regulations globally, and I have heard of SpaceX going through the usual frequency-allocation proceedings. A violation of those would be easy to show and should get them in serious trouble quickly. Do you have a source on that?


  • Maybe to add a bit of general context to this: I am not an astronomer, but I work in an adjacent field, so I hear a lot of astronomers talk about their work, both in private and in public.
    You don’t really hear them talk about satellites often. From what I gather, what really wrecks astronomy is light pollution, which has been doubling every few years for a while now and is basically confining optical astronomy to a select few areas.

    The worst thing for astronomy in the last century has probably, ironically, been the invention of the LED.

    The satellite streak thing is probably a minor point, where newspapers caught some justified ranting of astronomers and blew it way out of proportion.


  • Wrecking is not really the right term.
    It is causing work for astronomers, and wrecking very few older systems, but generally it is an issue you can work around, i.e. something temporary. What you usually see, in my experience of the field, is that some of your work gets degraded by satellite streaks, which are about 2x more common since Starlink, and you understandably complain at Starlink. Then you get around to coding up a solution to deal with the streaks, spend another few runs until it more or less works, and eventually forget this was ever a thing.

    In more detail, the base issue is that you are taking an image with probably minutes, hours, or days of exposure, and every satellite passing through that image is going to create a streak that does not represent a star. Naturally that is not good in most cases.
    The classic approach here, because this issue has existed since before Starlink, is to either retake the entire shot or manually throw out at least the frames with a satellite in them, depending on frequency, exposure length, and your methodology.

    The updated approach is to use info about satellite positions to automatically block out the very small angle of sky around them that their light can be scattered into by the atmosphere, and to remove that region before summing the frame into your final exposure. Depending on methodology, it might also be feasible to automatically throw away frames with any satellite in them, or you can count up which parts of the image were blocked for how long in total and append a little extra exposure only to them at the end (rough sketch at the end of this comment).

    To complicate this, I think the more recent complaints are not about satellites in their permanent constellation orbits but about freshly deployed ones that are still raising their orbits, simply because their positions are not as easy to determine while their orbits are changing. So you need to further adapt your system to specifically detect these chains of satellites and block them out of your exposures as well.

    The issue here is that you need to build this system that deals with satellite position data. And then you need that level of control over the frames in your exposure, which naturally does not match how exposure worked in the olden days of film, but to my knowledge does work on all “modern” telescopes.
    My knowledge here is limited, but I think that covers roughly the last 30-40 years of optical telescopes, which should largely be all ground-based optical telescopes relevant today. Further, you probably do need to replace the electronics in older telescopes, since they were built only to interrupt the exposure, not to allow this selective blocking.

    In summary: not affected are modern narrow-FOV optical telescopes and, in general, telescopes operating far from visual frequencies.
    Affected, with some extra work, are some older narrow (but not very narrow) FOV telescopes, as you now have to make them dodge satellites or pause briefly, where previously you could just have thrown away the entire exposure in the rarer cases where you caught a satellite. This would be software-only work (not that software is free).
    Modern wide-FOV telescopes might need hardware upgrades, or just software upgrades, to recover frames with streaks in them.
    Old wide-FOV telescopes may be taken out of commission, or at least have their effective observation time cut shorter by needing to give up more and more exposure time to satellites in the frame.

    It is a problem, yes, but in my understanding one that can be overcome, and one that causes its main annoyance and the majority of its issues while the number of satellites is increasing, not after it has stopped increasing.
    I don’t know of a single area of ground-based astronomy that couldn’t be done with even a million satellites in LEO.
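
    A minimal sketch of the frame-blocking and exposure bookkeeping mentioned above. It assumes you already have, per sub-frame, a boolean mask of the pixels a known satellite could contaminate; deriving those masks from orbit data is the part not shown, and all names and the normalization step are illustrative, not any observatory’s actual pipeline:

```python
import numpy as np

def stack_with_satellite_masks(frames, masks):
    """Sum short sub-frames into one exposure while skipping masked (satellite) pixels.

    frames: list of 2D float arrays, one short exposure each
    masks:  list of 2D boolean arrays, True where a satellite streak is expected
    Returns the combined image plus a per-pixel count of contributing frames,
    so under-exposed regions can be renormalized or topped up later.
    """
    total = np.zeros_like(frames[0], dtype=np.float64)
    contributions = np.zeros(frames[0].shape, dtype=np.int64)

    for frame, mask in zip(frames, masks):
        keep = ~mask                          # pixels unaffected in this frame
        total += np.where(keep, frame, 0.0)
        contributions += keep.astype(np.int64)

    # Scale each pixel as if it had received the full exposure, instead of
    # appending extra exposure only to the pixels that were blocked.
    normalized = total / np.maximum(contributions, 1) * len(frames)
    return normalized, contributions
```

    The contributions map is the piece described above: pixels that were blocked more often end up with less effective exposure, and you can either renormalize them as in this sketch or schedule a little extra exposure for just those regions at the end of the run.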