@MurkyConsequences @rechelon microeconomics is hard-core computer science. macroeconomics is astrology.

@regehr @dev incidentally, out-of-bounds writes into a stack-allocated std::array are a great way to crash GDB and leave no trace in any of the standard *SAN tools

@regehr @steve first comment “my man trained on K2 to climb Washington” 🤣

uhhh so there is such a thing as GPT Killing Words, and multiple of them come specifically and solely from Twitch Plays Pokémon lesswrong.com/posts/aPeJE8bSo6

@simon I’m honestly surprised I haven’t seen any Mastodon instances named with elephant seal puns yet

But Black folk are disproportionately stopped, searched, charged, arrested, tried, and convicted for drugs.

On bail: If you are arrested while poor and Black, you may be in jail *over a year* before trial. 600K+ folk in jail today are pre-trial. 😢

You will be locked up with folks my size. You will very likely be assaulted.

Even if you are innocent... An officer will offer you a way out of your year of hell: just admit your "guilt" and get out today!

@tomw

I suspect that writing certifiably produced before 2021 will be considered differently than writing produced afterwards, at least for some time.

Like low background steel for writing. 🤷🏿‍♂️

en.m.wikipedia.org/wiki/Low-ba

@rechelon @mutual_ayyde @KevinCarson1 that’s the mini/open-source version, I think. The “real” OpenAI version would probably get mad at you for asking for a real person, but if it did let you ask, you would get good results.

@alexwild That's bad Massachusetts drivers. Bad Rhode Island drivers are like "what lane am I in? where am I going? what year is it?" 🙃

@mmasnick @mattsheffield (this is one additional argument in favor of using a generic backing store like IPFS)

@mattsheffield @mmasnick what’s really needed for censorship resistance is to allow broadcasting encrypted content with the decryption keys discoverable via other channels (similar to a distributed Mega), so that the hosting services can’t be held liable for the content, since they have no way of distinguishing it from random bits
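
(A minimal sketch of that split, in Python with the `cryptography` package - the library choice and function names here are my own illustration, not anything from the thread: the host only ever stores ciphertext, and the key travels by some other channel.)

```python
# Sketch: the host stores only ciphertext; the key is shared out of band.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

def publish(content: bytes) -> tuple[bytes, bytes]:
    """Encrypt content for hosting; return (ciphertext, key).

    The ciphertext goes to the hosting/relay service, which cannot
    distinguish it from random bytes. The key is distributed through
    some other channel (DMs, a DHT lookup, a pastebin, etc.).
    """
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(content)
    return ciphertext, key

def fetch(ciphertext: bytes, key: bytes) -> bytes:
    """A reader who discovered the key elsewhere recovers the post."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    blob, key = publish(b"hello, censorship-resistant world")
    # Only `blob` is handed to the host; `key` goes out of band.
    assert fetch(blob, key) == b"hello, censorship-resistant world"
```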

@mattsheffield @mmasnick I’ve spent a while thinking about how to implement a censorship-resistant social network, and even with something like IPFS as the backing data store you still run into the problem that your users now need pinning services, or they have to keep an always-on machine somewhere to do the pinning themselves

@mattsheffield @mmasnick I think it’s because so many modern internet users are stuck on mobile or other ISPs that are hostile to P2P protocols - plus discovery has always been hard with P2P, which is why the point of censorship for torrents was always the trackers. I think the way to do this “right” is to implement the relay level as P2P (a nice architecture would be to try to self-organize the network into an expander graph) and let users manage relay lists
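
(Purely to illustrate the “self-organize into an expander graph” idea - a toy sketch, not a real protocol: if every relay links to a few peers chosen uniformly at random, the resulting sparse overlay is well connected with high probability, and a flood from any relay reaches essentially everyone. User-managed relay lists are out of scope here.)

```python
# Sketch: a random k-out overlay as a cheap approximation of an expander.
# Each relay links to k peers chosen uniformly at random; such sparse
# random graphs are expander-like (well connected) with high probability.
import random

def build_overlay(relays: list[str], k: int = 4) -> dict[str, set[str]]:
    """Return an undirected adjacency map: relay -> set of neighbor relays."""
    links: dict[str, set[str]] = {r: set() for r in relays}
    for r in relays:
        candidates = [p for p in relays if p != r]
        for peer in random.sample(candidates, min(k, len(candidates))):
            links[r].add(peer)
            links[peer].add(r)  # treat links as bidirectional relay channels
    return links

def relay(links: dict[str, set[str]], origin: str) -> set[str]:
    """Flood from one relay through the overlay; return the relays reached."""
    reached, frontier = {origin}, [origin]
    while frontier:
        node = frontier.pop()
        for peer in links[node]:
            if peer not in reached:
                reached.add(peer)
                frontier.append(peer)
    return reached

if __name__ == "__main__":
    overlay = build_overlay([f"relay{i}" for i in range(50)], k=4)
    print(len(relay(overlay, "relay0")))  # almost always 50
```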

@mutual_ayyde also - there are a lot of domains still open where no off-the-shelf models are even ready to process the data types you need to work with, and that’s where I’ll take a small team of smart folks with a vision over a directionless tech giant any day. I fully intend to change the world with what we’re building at Geopipe.

@mutual_ayyde *but* once the open source models are out there, anyone can fine-tune them to specific tasks on commodity hardware, and if they were halfway decent to start with, the result will frankly most likely beat whatever general-purpose thing the tech giants have on the specific task you wanted - which is where the “a good application and use case is the real moat” argument comes in (see e.g. the AI Profile Pic generator that went viral a few weeks ago)
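
(A hedged sketch of what task-specific fine-tuning on commodity hardware can look like, in PyTorch; the tiny backbone and random “dataset” below are stand-ins so the loop actually runs, not any particular open-source model.)

```python
# Sketch: task-specific fine-tuning of a pretrained backbone in PyTorch.
# The backbone stands in for an open-source checkpoint you downloaded;
# the data is random tensors purely so the loop executes end to end.
import torch
from torch import nn

backbone = nn.Sequential(            # pretend these weights came from a
    nn.Linear(128, 256), nn.ReLU(),  # released open-source checkpoint
    nn.Linear(256, 256), nn.ReLU(),
)
task_head = nn.Linear(256, 10)       # new head for *your* specific task
model = nn.Sequential(backbone, task_head)

# Small learning rate: adapt the pretrained weights, don't wreck them.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a small task-specific dataset (fits on commodity hardware).
x = torch.randn(512, 128)
y = torch.randint(0, 10, (512,))

for epoch in range(3):
    for i in range(0, len(x), 64):
        xb, yb = x[i:i + 64], y[i:i + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```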

@mutual_ayyde big changes in network architecture are even less feasible because the weights likely can’t be transferred over directly (although you can salvage certain layers and freeze them while retraining other components of the network around them)
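
(Again in PyTorch, a minimal sketch of the “salvage certain layers and freeze them” part; the layer split is illustrative - the transferred layers keep their weights fixed while only the new components get gradient updates.)

```python
# Sketch: reuse ("salvage") layers from an old network, freeze them,
# and train only the newly added components around them.
import torch
from torch import nn

salvaged = nn.Sequential(            # layers whose old weights still make sense
    nn.Linear(128, 256), nn.ReLU(),
)
for param in salvaged.parameters():  # freeze: no gradient updates here
    param.requires_grad = False

new_blocks = nn.Sequential(          # redesigned part of the architecture
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
model = nn.Sequential(salvaged, new_blocks)

# Only hand the trainable parameters to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x, y = torch.randn(64, 128), torch.randint(0, 10, (64,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                      # frozen weights accumulate no gradients
optimizer.step()
```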

@mutual_ayyde So a lot of the sorts of things you would want to fix in an already released neural network aren’t straightforwardly patchable because of that black-box behavior. For example, if you train with certain normalizations or regularizations that are later found to destroy information the network needs for better performance … that information has already been lost in the trained weights, and fixing it will only help further training.

@rechelon @KevinCarson1 @mutual_ayyde having tried both DALL-E 2 and Stable Diffusion extensively, I can say the difference in quality that a bigger budget makes is enormous. That said, we’re not talking about anything remotely approaching AGI, and for almost all real-world use cases you’re going to want to fine-tune with pretty tractable computational resources on a much smaller-scale dataset
