![](https://lemmy.dbzer0.com/pictrs/image/cf061b83-a216-4b7c-8434-8924a3a71f14.png)
![](https://lemmy.ml/pictrs/image/h1ChnLuBHr.png)
.software seems quite expensive (and long to type).
njalla offers registering .software domains, so I would recommend them.
Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)
Feel like it’s less about the masonry and more about how Brandwand sounds quite silly
Needs a none option for base
They probably mean forwardable ports, i.e. they have ISP-related problems with hosting servers.
Syncthing doesn’t need you to host a server; it can hole-punch right through the worst CGNATs.
It might also just be intermittent syncing that only happens when OP’s machine is running.
Fair point, though there are ways to change the probabilities of fusion paths, just not ever fully to 0.
Reaction probabilities scale with reactant concentration and temperature in ways we can exploit.
I tried to find some numbers on the relative probabilities and fusion chains, and ran into *The helium bubble: Prospects for 3He-fuelled nuclear fusion* (2021), which I hope is a credible source.
This paper contains a figure, which gives numbers to the fusion preferences you mentioned.
Paraphrasing the chapter “Technical feasibility of D-3He fusion” of the paper: up to about 2 billion K, the reactivity gap between ²H-³He and ²H-²H fusion grows to roughly 10x in favour of ²H-³He. ²H-²H reactions will either produce a ¹n (neutron) and a ³He, or produce a ¹H and a ³H, with the ³H then (effectively) immediately undergoing the much more reactive ²H-³H reaction, producing a neutron too.
In addition to picking an ideal temperature (2GK), we can, for the price of less than a factor-2 increase in pressure, use a 10:90 mixture of ²H:³He, or an even more ³He-rich one. This proportionally makes the ²H-²H branch a factor 10/90 ≈ 11% as likely as the ²H-³He branch, after correcting for reaction cross-section.
Past that, reactivity scales roughly with the square of pressure and the inverse of the ²H concentration, so another 10x in fusion plasma pressure would net another 100x decrease in neutron emission at equal energy output.
Given how quickly fusion reactivity rises with better fusion devices, we can probably expect to work with mixtures even more ³He-heavy than 10:90 when the technology matures, but 10:90 at 2GK would still have about 1/100ᵗʰ the neutrons per reaction and less than 1/100ᵗʰ per unit of energy produced compared to fully neutronic fusion like ²H-³H.
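Just to make that scaling concrete, here’s a rough back-of-the-envelope sketch. The ~10x reactivity advantage and the fuel mixes are the numbers from above; the simple n_D·n_He3 vs n_D² rate scaling is my own assumption for illustration, not something taken from the paper:

```python
# Rough D-D vs D-3He branching estimate at fixed temperature (~2 GK),
# assuming rate(D-3He) ~ n_D * n_He3 and rate(D-D) ~ n_D**2,
# plus the ~10x D-3He reactivity advantage quoted above.

def dd_per_dhe3(d_fraction, reactivity_advantage=10.0):
    """Ratio of D-D to D-3He reaction rates for a given ²H fuel fraction."""
    he3_fraction = 1.0 - d_fraction
    mix_ratio = d_fraction / he3_fraction      # 10:90 mix -> 10/90 ≈ 0.11
    return mix_ratio / reactivity_advantage    # fold in the cross-section advantage

print(dd_per_dhe3(0.5))   # 50:50 mix -> 0.1
print(dd_per_dhe3(0.1))   # 10:90 mix -> ~0.011, about one D-D per ~90 D-3He reactions
```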
The problem is solvable, but there is definitely a potential for taking shortcuts and performing ²H-³He with much higher neutron emissions.
It’s not mentioned in the text very clearly, but look at the link.
They were confused about what I said for hydro and wind, which I have now rewritten.
Kind of, it’s more complicated.
There are different fusion reactions, one example would be ²H-³He fusion used by Helion.
²H-³He is aneutronic, so doesn’t produce chargeless particles (every clump of stuff is either an electron or contains a proton). It is also an easy to achieve fusion reaction with good energy yield, with the downside that we don’t have ³He. Helion therefore has to split their fusion into two steps, producing ³He via ²H-²H fusion in a breeder-reactor and then fusing it in their energy-reactor. The first step would then emit neutrons and not really produce energy, the neutrons here could be used to further breed fuels.
Not having neutron emissions is quite useful because it allows you to make your fusion generator a lot smaller and safer around people, so neutrons are certainly something you want to avoid for far more valuable reasons than improving efficiency.
If we get very good with fusion we could also use the much harder to achieve ¹H-¹¹B reaction, which produces some neutrons but at very low energy (0.1% of total energy output), and is effectively aneutronic for safety concerns (neutrons have low penetration power and don’t really activate material, so can’t be used to breed say weapons-grade fission material). ¹H and ¹¹B are common so require no further steps to produce them.
There might still be directly-to-electricity pinch-fusion approaches that use neutronic fusion, I tried looking for any but didn’t find an example. We’ll see what ends up being done in practice, but close to 100% energy utilization is at least possible using pinch-fusion.
On the other hand, the losses in heat-conversion are inevitably huge. The higher the temperature of the heated fluid compared to the environment, the higher the efficiency, but given that our environment sits at like 300 K we can’t really escape losing a significant amount of energy even if we use liquid metal (like General Fusion) and manage to get up to 1000 K. The losses of going through heat are at least <environment temperature>/<internal temperature> (Carnot efficiency), so would still amount to 30% energy loss if we manage to use 1000 K liquid metal or supercritical steam to capture the fusion energy and drive a turbine. In practice supercritical steam turbines as used in nuclear plants hover around 50% efficiency at the high end.
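To put a number on that, here’s the textbook Carnot bound plugged with the temperatures above; this is purely illustrative arithmetic, not a model of any real plant:

```python
# Carnot limit for turning heat into work: efficiency = 1 - T_cold / T_hot.
# Temperatures in kelvin; ~300 K environment as above.

def carnot_efficiency(t_hot_k, t_cold_k=300.0):
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(1000))  # 0.7 -> at best 70% of the heat becomes work, 30% is lost
print(carnot_efficiency(900))   # ~0.67 -> around supercritical-steam temperatures; real cycles land well below this
```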
The magnetic field in pinch-fusion interacts with the (charged) particles directly, which are emitted at (many many) millions of K, so the theoretical efficiency there is over 99.99%. In effect, in heat-based fusion we lose a lot of that energy by mixing the extremely hot fusion products with the much colder working fluid.
Yeah, not the right words. I intended to say no steam turbine.
Instead of turning energy into heat into turbinable fluid flow in the form of steam, they directly use turbinable fluid flow.
The difference is really the lack of steps up to the turbine.
Solar is no doubt the coolest.
Hydro and wind are also very neat, going directly from mechanical to electric via generator, without a steam-turbine.
There is also a very cool category of fusion based on dynamic magnetic fields, which basically form a magnetic piston that expands directly due to the release of charged particles via fusion, and then capture the energy from that moving field by slowing it back down and initiating the next compression.
A fully electric virtual piston engine in some sense, driven by fusion explosions and capturing the energy straight into electricity.
Feels so much more modern than going from a highly advanced superconducting billion-K fusion reactor to heat to steam to turbine.
Where do I find a version of half decent, and how do they compare?
test reply 5
Shows as 1 comment for me
to avoid stuffing the beans
No idea what that means
“sockpuppet investigations” page instead so you can look into it further by yourself
I don’t see how that is related. I am by no means a wikipedia expert but reading that article and some of the linked investigations it all seems mundane to me.
tbh this response seems kinda shitty to me. You originally said “[Wikipedia] has a despicable procedure where they dox details of anyone whom they deem as alleged vandals.” I interpret this as a systemic issue (procedure, they) which happens regularly or always (procedure, anyone). It makes me imagine a wiki page “Vandalism cases on wikipedia” containing a table of cases with date, article, edit, and IP/account, existing for months or years and frequented by wikipedia mods and admins.
I interpret your response now as ‘there is info but it is private and part of ongoing investigations’.
If it was a larger issue there should be evidence. After all, doxxing is precisely about making something publicly available.
My interpretation now is that this is a small thing which either happened in the past or is unknown?
Can you please be specific about what you are referring to? And even without giving evidence, at least clarify what you allege wikipedia or admins or anyone else did here?
[Wikipedia] has a despicable procedure where they dox details of anyone whom they deem as alleged vandals.
Any details on this?
SMB should be fine. I used it for years on my primary systems (I moved to sshfs when I finally migrated to linux), and it wasn’t ever noticeably less performant than local disks.
Compared to local ntfs partitions anyway, ntfs itself isn’t all that fast in file operations either.
If you are looking at snapshots or media, that is all highly sequential with few file operations anyway. Something like gaming off of a nas via smb does also work, but I think you notice the lag smb has. It might also be iops limitations there.
Large filesizes and highly random, fast, low-latency reads are a very rare combination to see. I’d think swap files, game assets, browser cache (usually not that large to be fair).
For anything with fewer files and larger changes it always ran at over 100MiB/s for me until I exhausted the disk caches, so essentially the theoretical max accounting for protocol losses.
for music what I use is AIMP. I only hope it can work with wine because I don’t want to run a VM for it
I use that on android. Never knew there were desktop versions; odd that it supports android but no other linux.
Wine is very reliable now, it will almost certainly work out of the box.
Otherwise there are also projects to run android apps on linux, though no doubt with much more effort and a lower chance of success than wine.
because I prefer a local player over jellyfin
I used vlc then mpv for years before setting up jellyfin. I could still use them if I wanted to.
Over the internet, the largest files (~30Mbit/s) came up against my upload limit, but locally they still played snappily.
Scrubbing through files was as snappy as playing off of my ssd.
I do understand wanting music locally. I sync my music to my phone and portable devices too, so I’m not dependent on internet connectivity. None of these devices even support hdds, however; for my pc I see no reason not to play off of my nas using whatever software I prefer.
I didn’t want to buy him an SSD unnecessarily big […] for the lower lifespan
Larger ssds almost always have higher maximum writes. If you look at very old (128 or 256GB drives from 2010-2015 ish) or very expensive drives you will get into higher quality nand cells, but if you are on a budget you can’t afford those in larger capacities, and the older ones may have 2-3 times the cycles per cell but like a tenth the capacity, so still about 1/3rd the total writes.
The current price optimum to my knowledge is 2TB SSDs for ~85USD with TLC rated up to 1.2PBW, so about 600 cycles. If you plan on a lifetime of 10 years, that is 330GB per day, or 4GB/day/USD. I can’t even find SLC on the market anymore (outside of 150USD 128GB standalone chips), but I have never seen it close to that price per byte written. (If you try looking for slc ssds, you will find incorrectly tagged tlc ssds, with tlc prices and lifetime. That is because “slc cache” is a common ssd buzzword.)
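For reference, the arithmetic behind those numbers; the drive size, price and endurance rating are just the example figures above:

```python
# Endurance math for the example drive above: 2 TB TLC, 1.2 PBW rating, ~85 USD.
capacity_tb = 2
endurance_pbw = 1.2
price_usd = 85
lifetime_years = 10

cycles = endurance_pbw * 1000 / capacity_tb                # ≈ 600 full-drive write cycles
gb_per_day = endurance_pbw * 1e6 / (lifetime_years * 365)  # ≈ 330 GB written per day over 10 years
gb_per_day_per_usd = gb_per_day / price_usd                # ≈ 3.9 GB/day per USD

print(round(cycles), round(gb_per_day), round(gb_per_day_per_usd, 1))
```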
I didn’t want to buy him an SSD unnecessarily big […] for the cost
Another fun thing about HDDs is that they have a minimum price, since they are large bulky chunks of metal that are inherently hard to manufacture and worth their weight in materials.
That lower cutoff seems to be around 50USD, for which you can get 500GB or 2TB at about the same price. 4TB is sold for about 90USD.
In terms of price, ignoring value and just going for the cheapest possible storage, there is never a reason to buy an HDD below the 2TB for ~60USD point. A 1TB SSD has the same price as a 1TB HDD, and below that SSDs are cheaper than HDDs.
So unless your usecase requires 2TB+, SSDs are a better choice. Or if it needs 1TB+ and also has immensely high rewrite rates.
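Spelled out as rough price per TB using the price points above (treat the exact numbers as illustrative, prices drift):

```python
# Price per TB at the rough price points quoted above (USD).
drives = {
    "500 GB HDD": (50, 0.5),   # near the ~50 USD floor price
    "2 TB HDD":   (60, 2.0),
    "4 TB HDD":   (90, 4.0),
    "2 TB SSD":   (85, 2.0),
}
for name, (usd, tb) in drives.items():
    print(f"{name}: {usd / tb:.1f} USD/TB")
# -> 100.0, 30.0, 22.5 and 42.5 USD/TB: the sub-2TB HDD costs more per TB than the SSD,
#    which is the crossover described above.
```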
a few VMs, a couple of snapshots
I have multiple complete disk images of various defunct installs, archived on my nas. That is a prime example for stuff to put into network storage. Even if you use them, loading them up would be comparable in speed to doing it off of an HDD.
Oh yeah absolutely. As mentioned above I myself use spinning rust in my nas.
The difference is decreasing over time, but it’ll be ages before ssds trump hdds in price per TB.
The difference now compared to the past is that you are looking at 4TB SSDs and 16TB HDDs, not 512GB SSDs and 4TB HDDs. In my observation the vast majority has no use for that amount of storage currently, while the remainder is willing or even happy to offload the storage onto a separate machine with network access, since the speed doesn’t matter and it’s the type of data you might want to access rarely but from anywhere on any kind of device.
Compare for example phones that are trying to sell you 0.25 or 0.5 TB as a premium feature for hundreds of USD in markup.
If people generally had use for 2TB of storage, phone makers would instead start at 0.5TB and upsell you to 2 and 4TB.
I myself have 32TB of storage and am constantly asking around friends and family if anyone has large amounts of data they might wanna put somewhere. And there isn’t really anyone.
Even the worst games only use up so many TB, and you don’t really wanna game off of HDD speeds after tasting the light. And if you’d have to copy your game over from your HDD, the time it’d take to redownload from steam is comparable unless your internet is horrifically bad.
My extensive collection of linux ISOs is independent and stable, and I do actually share it with a few via jellyfin, but in all its greatness both in amount and quality it still packs in below 4TB. And if you wanna replicate such a setup you’d wanna do it on a dedicated machine anyway.
If I had to slim down I could fit my entire nas into less than 4TB if I’m being honest with myself; in my defense, I built it prior to cost-effective 4TB SSDs. The main benefit for me is not caring about storage. I have auto backups of my main apps on my phone, which copy the entire apk and data directories daily and move them to the server. That generates about 10GB per day.
I still haven’t bothered deleting any of those, they have just been accumulating for years. If I ever get close to my storage capacity, before buying another drive I’d first go in and delete the 6TB of duplicate backups of random phone apps dated 2020-2026.
I wrote a paper grouping together info of tons of simulations. And instead of taking out the measurement files containing the relevant values every 10 simulation steps (2.5GB), or the data of all system positions and all measured quantities every 2 steps (~200GB), I copied the entire runtime directory. For 431 simulations, 8.5GB per, totaling 1.8TB.
And then later my entire main folder for that entire project and the program data and config dirs of the simulation software, for another half a TB. I could have probably saved most of that by looking into which files contain what info and doing some most basic sorting. But why bother? Time is cheap but storage is cheaper.
But to go for simply the feeling of swimming in storage capacity, you first need to experience it. Which is why I think no one wants it. And those that do already have a nas or similar setup.
Maybe you see a usecase that would see someone without knowledge or equipment need tons of cheap storage in a single desktop pc?
M.2 nvme uses PCIe lanes. In the last few generations both AMD and intel were quite skimpy with their PCIe lane offering; generally their consumer CPUs have only around 20-40 lanes, with servers getting over 100.
In the default configuration, nvme gets 4 lanes, so usually your average CPU will support 5-10 M.2 nvme SSDs.
However, especially with PCIe 5.0 now common, you can get the speed of 4 PCIe 3.0 lanes in a single 5.0 lane, so you can easily split all your lanes dedicating only a single lane per SSD. In that configuration your average CPU will support 20-40 drives, with only passive adapters and splitters.
Further you can for example actively split out PCIe 5.0 lanes into 4x as many 3.0 lanes, though I have not seen that done much in practice outside of the motherboard, and certainly not cheaply. Your motherboard will however usually split out the lanes into more lower-speed lanes, especially on the lower end with only 20 lanes coming out of the CPU. In practice on even entry-level boards you should count on having over 40 lanes.
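As a quick sanity check on those drive counts, here’s the arithmetic; the lane budget is just a rough consumer-CPU figure from above, not the spec of any particular chip:

```python
# Rough NVMe drive-count estimate from the lane budgets discussed above.
cpu_lanes = 24            # typical consumer CPU, somewhere in the 20-40 lane range
lanes_per_drive_x4 = 4    # default x4 link per NVMe drive
lanes_per_drive_x1 = 1    # x1 link; on PCIe 5.0 still roughly PCIe 3.0 x4-level bandwidth

print(cpu_lanes // lanes_per_drive_x4)  # 6  -> the "5-10 drives" ballpark at full link width
print(cpu_lanes // lanes_per_drive_x1)  # 24 -> the "20-40 drives" ballpark at x1 each
```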
As for price, you pay about 30USD for a passive PCIe x16 to 4x M.2 slot card, which brings you to 6 M.2 slots on your average motherboard.
If you run up against the slot limit, you will likely be using 4TB drives and paying at the absolute lowest a grand for the bunch. I think 30USD is an acceptable tradeoff for a 20x speedup that almost everyone in this situation will be taking.
If you need more than 6 drives, where previously you would be looking at a PCIe SATA or SAS card, you can now get x16 PCIe cards that passively split out to 8 M.2 slots, though the price will likely be higher. At these scales you almost certainly go for 8TB SSDs too, bringing you to 6 grand. Looking at pricing I see a RAID card for 700USD which supports passthrough, i.e. it can act as just a PCIe to M.2 adapter. There are probably cheaper options, but I can’t be bothered to find any.
Past that there is an announced PCIe x16 to 16-slot M.2 card, for a tad over 1000USD. That is definitely not a consumer product, hence the price for what is essentially still a glorified PCIe riser.
So if for some reason you want to add tons of drives to your (non-server) system, nvme won’t stop you.
The inputs of the model are full copies of copyrighted data, so the “amount used” is the entirety of the copyrighted work.
If you want to apply current copyright law to the inner workings of artificial neural networks, you run into the problem that it doesn’t work on humans either.
A human remembering copyrighted works, be it rote memorization or regular memory, is similarly creating a copy of that copyrighted work somewhere in their brain.
There is no law criminalizing the knowledge or inspiration a human obtains from consuming media they did not have the rights to consume. (In many places it isn’t even illegal to acquire and consume media you don’t have rights to, only to provide it to others without those rights.)
Criminalizing knowledge, or brains containing knowledge, can’t possibly be a good idea, and I think neural nets are too close to the function of the brain to apply current regulation to one but not the other. You would at minimum need laws explicitly specified to apply only to digital neural nets or something similar, and it appears this page is trying to work within existing regulation. (If we do create law applying only to digital neural nets, and we ever create an ai intelligent enough that it could deservedly be called a person, then I’m sure that ai wouldn’t be greatly happy about weird discriminatory regulation applying only to its brain but not to those of all the other people on this planet.)
A neural net works too similarly to the human brain to call what the neural net does “copying” but what the human brain does “learning, memorization, inspiration”. If you wanna avoid criminalizing thoughts, I don’t see a way to make the arguments this website makes.
The first day of the naturally occurring spring, occurring naturally at a set number of days after the solstice.
You can recognize it from the skybox changing color.