

Do you not see any value in engaging with views you don’t personally agree with? I don’t think agreeing with it is a good barometer for whether it’s post-worthy
Fwiw it looks like Cutipol is the brand and that Horne is just the retailer
I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.
Misc points:
So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.
Unfortunately it’s hard for the rest of us to tell whether you genuinely want a video to save you from having to read 18 sentences or if you’re just taking the piss lol
For platforms that don’t accept those types of edits, the link OP tried to submit: https://www.theverge.com/news/690815/bill-gates-linus-torvalds-meeting-photo
That video of them interviewing people on the street with it was pretty fun!
> the atrocious webp format
I continue to be confused by the level of widespread hate WebP still gets. It’s old enough to be widely (albeit not universally) supported in software like web browsers, but new enough to provide similar-or-better (usually better) lossless compression than PNG (21,578 bytes for the original image) and typically better lossy compression than JPEG at comparable perceived quality, especially for the types of images typically shared on the internet (rather than, say, images saved directly from a DSLR camera). It’s why servers bother to re-encode JPEG images to WebP for delivery - they wouldn’t waste the compute time re-compressing if it wasn’t generally worth doing.
I could understand it if this were, say, 10-15 years ago, when the format still wasn’t super widely supported - but that’s basically where we are with JPEG XL and AVIF support right now too. If one of those two had exactly the level of support that WebP does right now then yes, of course we should probably use one of them instead - but we’re not there yet. Until we are, WebP often has the best compromise between compatibility and compression efficiency as far as image formats go, and that’s why a lot of sites do this re-compression thing using WebP. I gave some examples using digital art (one of the things I was compressing a lot at the time) a year ago in a related discussion: https://lemmy.world/post/6665251/4462007
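If you want to sanity-check the size differences on your own images, here’s a minimal sketch using Python’s Pillow library - the filenames and quality settings are just placeholders I picked for illustration, not anything specific to the image being discussed:

```python
# Rough size comparison of PNG vs WebP (lossless and lossy) vs JPEG for one image.
# Requires Pillow ("pip install Pillow"); paths and quality values are arbitrary examples.
import os
from PIL import Image

src = "original.png"  # placeholder filename
img = Image.open(src)

img.save("lossless.webp", "WEBP", lossless=True)          # lossless WebP
img.save("lossy_q80.webp", "WEBP", quality=80, method=6)  # lossy WebP (method=6 = slowest/best effort)
img.save("baseline_q80.jpg", "JPEG", quality=80)          # JPEG at a comparable quality setting

for path in (src, "lossless.webp", "lossy_q80.webp", "baseline_q80.jpg"):
    print(f"{path}: {os.path.getsize(path):,} bytes")
```

Results will obviously vary with the type of image, which is exactly why digital art and DSLR photos can lead people to different conclusions.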
A news website local to me recently-ish started delivering AVIF-compressed (or, more likely, re-compressed) images the same way a lot of sites currently do with WebP, because my browser supports AVIF - so at least we’re starting to see a token amount of uptake of the next-gen formats in the wild.
Thanks for sharing!
Fair enough - glad you’ve found it helpful (Y)
You’re making assumptions about how they work based on your intuition - luckily we don’t need to do much guesswork about how the sorts are actually implemented because we can just look at the code to check:
```sql
CREATE FUNCTION r.scaled_rank (score numeric, published timestamp with time zone, interactions_month numeric)
    RETURNS double precision
    LANGUAGE sql
    IMMUTABLE PARALLEL SAFE
    -- Add 2 to avoid divide by zero errors
    -- Default for score = 1, active users = 1, and now, is (0.1728 / log(2 + 1)) = 0.3621
    -- There may need to be a scale factor multiplied to interactions_month, to make
    -- the log curve less pronounced. This can be tuned in the future.
    RETURN (
        r.hot_rank (score, published) / log(2 + interactions_month)
    );
```
And since it relies on the hot_rank function:
```sql
CREATE FUNCTION r.hot_rank (score numeric, published timestamp with time zone)
    RETURNS double precision
    LANGUAGE sql
    IMMUTABLE PARALLEL SAFE RETURN
    -- after a week, it will default to 0.
    CASE WHEN (
        now() - published) > '0 days'
        AND (
            now() - published) < '7 days' THEN
        -- Use greatest(2,score), so that the hot_rank will be positive and not ignored.
        log (
            greatest (2, score + 2)) / power (((EXTRACT(EPOCH FROM (now() - published)) / 3600) + 2), 1.8)
    ELSE
        -- if the post is from the future, set hot score to 0. otherwise you can game the post to
        -- always be on top even with only 1 vote by setting it to the future
        0.0
    END;
```
So if there are no further changes made elsewhere in the code (which may not be true!), it appears that `hot` applies no extra negative weighting for low scores, because it takes the larger of `2` and `score + 2` in its calculation - every score at or below 0 gets clamped to the same value. If that’s correct, the posts you’re pointing out are essentially being ranked as if their voting score were 0, which I hope helps to explain things.
edit: while I was looking for the function someone else beat me to it, and it looks like the `hot_rank` function I posted may or may not be the current version - but hopefully you get the idea regardless!
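In case it’s easier to poke at than SQL, here’s a rough Python translation of the two functions above - assuming the versions I quoted are still what’s deployed, which (per the edit) may not be the case. Note that Postgres’s log() is base 10, hence math.log10:

```python
# Rough Python re-implementation of Lemmy's hot_rank / scaled_rank (as quoted above),
# purely for illustration - the real thing runs inside Postgres and may have changed since.
import math
from datetime import datetime, timedelta, timezone

def hot_rank(score: int, published: datetime) -> float:
    age = datetime.now(timezone.utc) - published
    # Posts from the future, or older than a week, get a rank of 0.
    if not (timedelta(0) < age < timedelta(days=7)):
        return 0.0
    hours = age.total_seconds() / 3600
    # greatest(2, score + 2): every score at or below 0 is clamped to the same value.
    return math.log10(max(2, score + 2)) / ((hours + 2) ** 1.8)

def scaled_rank(score: int, published: datetime, interactions_month: int) -> float:
    # Smaller/quieter communities get boosted because the divisor shrinks with activity.
    return hot_rank(score, published) / math.log10(2 + interactions_month)

one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)
for s in (-10, -1, 0, 1, 5, 50):
    print(f"score {s:>3}: hot_rank = {hot_rank(s, one_hour_ago):.4f}")
```

Running that shows the -10, -1, and 0 cases all produce the same hot_rank, which is the clamping described above.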
“Hot” is a mix of recency and votes. The posts in your example score low on votes but very high on recency (<1 hour ago) and extremely high on the size scaling because that community ( !hp_fanfiction@literature.cafe ) is tiny with only two subscribers.
You might find Scaled to be a more appropriate sorting option when you’re viewing the communities you’ve subscribed to, rather than the firehose of /all
> Memory connected via the pci bus to the CPU, would be too slow for application use like that.
> The experimental results presented in this paper demonstrate that Micron’s CZ122 CXL memory modules used in software level ratio based weighted interleave configuration significantly enhance memory bandwidth for HPC and AI workloads when used on systems with Intel’s 6th Generation Xeon processors.
Found via Wendell: YouTube
edit: typo
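For anyone wanting to experiment with the “ratio based weighted interleave” part themselves: recent Linux kernels (6.9+) expose per-node weights for the weighted-interleave memory policy via sysfs. Here’s a rough sketch of what setting a 3:1 DRAM:CXL split could look like - the node numbers and the ratio are made-up examples for illustration, not values from the paper:

```python
# Sketch: set per-node weights for Linux's weighted-interleave NUMA policy via sysfs
# (kernel 6.9+). Node IDs and the 3:1 ratio are made-up examples - check your actual
# topology first (e.g. with `numactl -H`). Needs root to write these files.
from pathlib import Path

SYSFS_DIR = Path("/sys/kernel/mm/mempolicy/weighted_interleave")

# Hypothetical topology: node 0 = local DRAM, node 2 = CXL expander.
weights = {0: 3, 2: 1}  # allocate ~3 pages on DRAM for every 1 page on CXL

for node, weight in weights.items():
    node_file = SYSFS_DIR / f"node{node}"
    node_file.write_text(f"{weight}\n")
    print(f"{node_file}: {node_file.read_text().strip()}")

# Note: a process still has to opt in to the weighted-interleave mempolicy
# (e.g. set_mempolicy(MPOL_WEIGHTED_INTERLEAVE, ...) or a recent numactl)
# before these weights affect its allocations.
```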
> Using ram doesn’t add anything.
It would improve access latency vs flash though, despite less difference in raw bandwidth
Nice to see he took it in stride given how… aggressive the post was about him lol
Please don’t give the US any ideas ;_;
Presumably the member states can decide to interpret it however they’d like, but for whatever it’s worth I’m just paraphrasing what political scientist William Spaniel (…who I thought would have had a Wikipedia page by now) has said on the topic of Article 5 (though the context wasn’t the US invading Greenland lol)
Additionally, it’s helpful to know the specific language used in Article 5:
> Article 5
> “The Parties agree that an armed attack against one or more of them in Europe or North America shall be considered an attack against them all and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defence recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, **such action as it deems necessary**, including the use of armed force, to restore and maintain the security of the North Atlantic area.
> Any such armed attack and all measures taken as a result thereof shall immediately be reported to the Security Council. Such measures shall be terminated when the Security Council has taken the measures necessary to restore and maintain international peace and security.” (emphasis added)
Article 5 doesn’t actually oblige NATO members to defend anything by force, it obliges NATO members to decide what actions are “deemed necessary” and then to undertake those actions. If a NATO member gets invaded, everyone could – in theory – write a sternly worded letter and call it a day (though I doubt that would be the actual response). As you/others have more or less said, the actual action chosen would largely be the result of political will.
I don’t know how well this works for Macs, but is a multi-boot environment a possibility? You could have a separate OS set up for a group of tasks which you boot into when you need to do that. It seems a bit clunky compared to e.g., virtual desktops or similar though.
> So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that while you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.
> Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.
It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few times where things are misrepresented in this post e.g.:
The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP but my money’s on it being more than 2.2k euro.
NVENC isn’t much of a moat right now, as both Intel and AMD’s encoders are roughly comparable in quality these days (including in Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.
Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).
I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, produce visual quality comparable to native on average for most users in most situations - and that was with DLSS2 in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it then what does it matter?
Manufacturing on newer, increasingly complex nodes is getting expensive as all fuck. If it’s more cost-efficient to spend some of that die area on specialized cores that do high-quality upscaling, rather than using all of it for native rendering, then that’s fine by me. I don’t think dismissing DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. People less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS and marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.
There are some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint then why is the sub-heading “DLSS is, and always was, snake oil”?
obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”