Of all the super nuts things I have seen with #ChatGPT, this is the most superly nuts-est, and I am seriously interested in what others think is going on here.
ChatGPT claims to have run my student @lremes's #distributed #fuzzer. It claims to have *found a specific bug* in libpng, a bug we know is real. And it *suggested stuff to add to his README*.
The crazy thing about the bug it claims to have found is that it is the *same* bug Luciano found by actually running the fuzzer. That bug *is* in a CVE, but there is nothing on the web indicating that *this* fuzzer can find *this* bug. ChatGPT even produces a nice summary of the bug (probably taken from the CVE).
So what's probably going on here? Did it actually run this fuzzer, interpret the crashes it found, and successfully connect them to a CVE? Seems amazing if true, but highly unlikely. Or did it find some other way to (correctly) guess what bug would be found? More plausible, but still pretty wild.
And it clearly did go through the GitHub repo, which has only been online for a few weeks, since it suggested expanding the README with stuff that is only in the library.
This is wild.
@regehr @eeide @snagy @gannimo @moyix
Yeah, I think that's right. But in playing around with it, it has *also* told me that it can't fetch anything that wasn't in its training set, which supposedly cuts off last year, yet it clearly did exactly that here, since this code wasn't even written until a few weeks ago.
So sure, I think "it actually ran the software" is highly, highly unlikely, but it clearly does some things that it will at least tell you it can't do.
@ricci @snagy @regehr @eeide @moyix
Agreed, it's highly unlikely (I would even dare say impossible) that it ran the code. My guess is that it indexed some emails to the mailing list where the CVE was assigned/bug was disclosed and then used this background information to bullshit its way to the answer you got.
When I played with it, I personally came to the conclusion that it's amazing at bullshitting its way around tough topics in a way that appears semi-plausible (but often hides subtle issues).
@elfprince13 @regehr @gannimo @eeide @moyix @ricci
Could be a really cool breakthrough for the physically impaired (e.g., ALS patients).