

Oh, you think Canadians aren’t going to get in on tariff evasion? They 100% will.
The tremendous irony is America was founded to evade massive tariffs, sorta… and now we’re doing it to ourselves.
Only barely, with 4 known Senate Trump skeptics:
Sens. Susan Collins (Maine), Mitch McConnell (Ky.), Lisa Murkowski (Alaska) and Rand Paul (Ky.)…
https://www.axios.com/2025/04/02/senate-repeal-trump-tariffs-canada
You can generally toggle LLM “grounding” features, aka inserting web searches into their context.
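Mechanically, “grounding” is little more than fetching search snippets and splicing them into the prompt before the model sees it. A minimal sketch of that idea — the function name and prompt wording here are illustrative placeholders, not any vendor’s actual API:

```python
# Sketch of what "grounding" does under the hood: retrieved web search
# snippets (fetching elided) get prepended to the model's context window.
# Function name and prompt template are illustrative, not a real API.
def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Prepend retrieved search snippets so the model can use fresh facts."""
    context = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return (
        "Web search results:\n"
        f"{context}\n\n"
        f"Answer using the results above. Question: {question}"
    )
```

Toggling grounding off just means the model answers from its (stale) training data alone.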
Modern LLMs have an information “cutoff” of a few months ago at the latest, so the base models will have zero awareness of this formula.
TBH it’s probably human written.
I used to write small articles for a tech news outlet on the side (HardOCP), and the entire site went under well before the AI boom because no one can compete with conveyor belts of thoughtless SEO garbage, especially when Google promotes it.
Point being, this was a problem well before the rise of LLMs.
In this case, it’s as simple as “type it into ChatGPT, like the Reddit users did” :/
That they didn’t try to replicate it.
How about the outlet checks and finds out?
I did, and I couldn’t get low-temperature Gemini or a local LLM to replicate it, and not all the tariffs seem to be based on the trade deficit ratio, though some suspiciously are.
Sorry, but this is a button of mine: outlets that pose questions that are stupidly easy to verify but don’t even try. No, just cite people on Reddit and Twitter…
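For reference, the rule Reddit users reverse-engineered is just the goods trade deficit divided by imports, halved, with a 10% floor. A quick sketch if you want to check a country’s figure yourself — the numbers in the example are illustrative, not official trade data:

```python
# The alleged tariff rule: (trade deficit / imports) / 2, floored at 10%.
# Inputs are goods trade figures in the same units (e.g. billions of USD).
def alleged_reciprocal_tariff(us_imports: float, us_exports: float) -> float:
    """Return the alleged tariff rate as a fraction (0.10 = 10%)."""
    deficit_ratio = (us_imports - us_exports) / us_imports
    return max(0.10, deficit_ratio / 2)
```

For example, a country we import $400B from while exporting only $100B back would get (300/400)/2 = 37.5%, while a near-balanced trade partner hits the 10% floor — which matches some, but notably not all, of the announced rates.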
True! Models not trained on a specific language are generally bad at that language.
However, there are some exceptions, like a Japanese tune of Qwen 32B which dramatically enhances its Japanese, but the training has to be pretty extensive.
And even that aside… the effect is still there. The point is to illustrate that LLMs are sort of “language independent” internally, like you said.
It’s a metaphor.
They’re translating the input tokens to intent in the model’s middle layers, which is a bit more precise.
I use local instances of Aya 32B (and sometimes Deepseek, Qwen, LG Exaone, Japanese finetunes, others depending on the language) to translate stuff, and it is quite different than Google Translate or any machine translation you find online. They get the “meaning” of text instead of transcribing it robotically like Google, and are actually pretty loose with interpretation.
It has soul… sometimes too much. That’s the problem: it’s great for personal use, where it can occasionally be wrong or flowery, but not good enough for publishing and selling, as the reader isn’t necessarily cognizant of errors.
In other words, AI translation should be a tool the reader understands how to use, not something to save greedy publishers a buck.
EDIT: Also, if you train an LLM for some job/concept in pure Chinese, a surprising amount of that new ability will work in English, as if the LLM abstracts language internally. Hence they really (sorta) do a “meaning” translation rather than a strict definitional one… Even when they shouldn’t.
Another thing you can do is translate with one local LLM, then load another for a reflection/correction check. This is another point for “open” and local inference, as corporate AI goes for cheapness, and generally tries to restrict you from competitors.
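A minimal sketch of that translate-then-reflect loop against two local OpenAI-compatible servers (llama.cpp, Ollama, vLLM, etc. all expose this endpoint shape). The ports, model names, and prompt wording are my own placeholders:

```python
import json
import urllib.request

def build_reflection_prompt(source: str, draft: str) -> str:
    """Prompt for the second model: critique and correct the first draft."""
    return (
        "You are reviewing a translation into English.\n\n"
        f"Source text:\n{source}\n\n"
        f"Draft translation:\n{draft}\n\n"
        "Point out mistranslations or overly loose interpretation, "
        "then give a corrected final translation."
    )

def chat(base_url: str, model: str, prompt: str) -> str:
    """Call an OpenAI-compatible /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.3,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def translate_with_reflection(source: str) -> str:
    # Placeholder ports/model names: first server runs the translator,
    # second runs a different model family for the reflection pass.
    draft = chat("http://localhost:8080", "aya-expanse-32b",
                 f"Translate to natural English:\n{source}")
    return chat("http://localhost:8081", "qwen2.5-32b-instruct",
                build_reflection_prompt(source, draft))
```

Using two different model families for the two passes is the point — a model reviewing its own output tends to rubber-stamp its own mistakes.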
My impression of Newsom is that he’s slippery as a snake.
I dunno why folks around here want him to run for president.
It’s useless as a warning in this format. Might as well post it in a private chat.
Pfft, just watch, it will be an excuse to fire people in droves/cut wages while executives either get raises for “bold” responses or golden parachutes. Day traders will adore the volatility at the expense of retirement funds. Under duress, companies will merge like crazy since the only barrier now seems to be complying with government ethos. Reduced capital gains will juice the market. The masses will shoulder tariffs.
The response, so far, seems to be licking boots, unfortunately.
The irony is that old-school MAGA, like the kind bred from 4chan, is accelerationist too, but it doesn’t seem as prominent anymore.
Yeah, I never thought I’d say it, but I hope the entire world shuns America and fucks us over.
I doubt even that will fix us, though. We are past some kind of disinformation event horizon, where most of us will never see much reality outside our local lives, including leaders who are drinking the Kool-Aid. Even Idiocracy is kinda idyllic, because at least Camacho was relatively open-minded.
To who? Republican voters who will never see this in their feed, or already hate scientists as elites out to get them? Government leaders who openly hate them, either for personal gain or real pseudoscientific beliefs? Opposition who can’t do anything about it, and might not if they could anyway? Profit-obsessed news outlets who would never feature something as boring as this unless it’s already something their audience wants to hear?
It’s too late.
I swear, organizations like this are communicating like it’s 1950 as the entire country sleepwalks into an information dystopia. They need to be loud, sensationalist if not outright propagandist, get on podcasts and Fox, game commercial social media and otherwise shun it if they want to change any minds.
Oh he does, he just doesn’t care.
The line goes up, then you sell out to some anticompetitive behemoth to eat before it implodes. That’s how it works.
Many American folks I know, even more conservative ones, tend to tune out familiar news sources because they’re so bad. Others are really glued to Facebook or whatever their feed of choice is.
TBH I think America (on average) just lives in a stronger information dystopia than Europe. People here don’t connect Social Security cuts to their own lives, or even know about Trump’s/Musk’s statements on it.
Moral of the story… please ban Facebook, X, and really most engagement-driven social media as fast as you can. Or risk turning into… us.
The Switch 2 chip is effectively older than the now aging (but fantastic for the price/size) Van Gogh chip in the Steam Deck. It shouldn’t be expensive to make.