
  • George Ford
    started a topic ChatGPT - This is interesting


    I'm not sure how complex the code it produces can be, but here's a test of some simple tasks.


    [Attached screenshots: statadid001.jpg, statadid002.jpg, statadid003.jpg]


  • Nick Cox
    replied
    No one replied on Twitter to https://twitter.com/mjpost/status/1633818755930247170 that I can see, but FWIW the solution is quite wrong.

    If a user picked up that they should look at egen, that would help with a partial solution.

  • George Ford
    replied
    https://thenextweb.com/news/the-first-iphone-what-the-critics-said-10-years-ago

  • Sebastian Kripfganz
    replied
    To be fair, there appears to be some learning. When confronted with the same question again today as in post #32, ChatGPT now provides the correct answer.

  • Nick Cox
    replied
    In https://stackoverflow.com/questions/...sonal-variable the question was evidently to explain the seasonal difference operator, and the answer was:

    Generate a quarterly seasonality variable
    gen time = _n
    gen S6.quarterly = sin((2*_pi()*time)/6)
    That's just a concoction of wrong and irrelevant. It would take quite a long post to disentangle all the confusion, but the original question can be answered directly.
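
    For the record, here is a minimal sketch of what the genuine seasonal difference operator does in Stata; the variables time and y below are hypothetical stand-ins, not taken from the original question. After tsset, S6.y is simply y - L6.y, the 6-period seasonal difference.

    Code:
    clear
    set obs 24
    gen time = _n
    gen y = rnormal()        // placeholder series for illustration
    tsset time
    gen sd6y = S6.y          // 6-period seasonal difference: y - L6.y

    No sine waves are involved.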

    There is a classic selection problem here, naturally. People who like Statalist and are sceptical of this beast will love to add posts like this with little horror stories. Satisfied users are in some other cell of the table.

    As I said earlier, the time the proverbial hits the proverbial will be when some misguided researcher produces garbage at an important life stage (e.g. near thesis completion) that -- whether discovered or not -- hinges on taking ChatGPT the wrong way. Of course, that could happen with solutions posted anywhere. But here there is a strong tendency to correct others' mistakes; that is often reported as shocking or obnoxious, but what do people prefer? Sugar-coated wrong answers?

    Quis custodiet ipsos custodes? Or, in a somewhat free translation, which AI agent watches the AI agents?

  • Sebastian Kripfganz
    replied
    I am actually mostly disappointed by ChatGPT. It has a strong tendency to present alternative facts. The command name for the ADF test in Stata is certainly not adf (as in the initial post of this thread), but dfuller.
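
    A quick check against a shipped dataset confirms the real command; the lags() choice below is illustrative only, not a recommendation.

    Code:
    webuse air2, clear              // airline-passenger example data, already tsset
    dfuller air, lags(3) trend      // augmented Dickey-Fuller test; there is no adf command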

    Here is another statistics example of ChatGPT stupidity (not Stata related):
    [Attached screenshot: ChatGPT.jpg]

  • Daniel Schaefer
    replied
    What we really need is a way to quantify and evaluate our doubt about the accuracy of the model output.

  • Daniel Schaefer
    replied
    ChatGPT is a very large, very sophisticated next-token prediction algorithm. It's a little ironic that the model -- trained on a corpus of data from the internet -- doesn't find "oh you're right, my mistake" to be a very likely path to go down.

  • Federico Tedeschi
    replied
    The problem is that, when ChatGPT doesn't know the answer, it usually invents one (be it a publication or a software command option). And if you point out the error, it usually produces similarly wrong answers until it either returns to the original one or blames you for not having installed the proper package/library. But yes, it seems reasonable that the simpler the question (or, better said, the easier it is to find a correct answer to it online), the higher the likelihood of getting a correct and useful answer.

  • Yusra Noorwali
    replied
    I agree that ChatGPT is great for beginners like me. I have been using it daily to answer simple questions about Stata code, but it won't give code for some issues.
    I hope that with time it gets smarter and provides more useful answers to complicated issues.

  • Nick Cox
    replied
    See https://imgur.com/7PBId6f where it is recommended to run use with a mixedcase option. It seems that StataCorp never got round to implementing that; in fact, so far as I can see, no such option would even make sense in principle for use.
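
    This is easy to verify in a live session; the filename below is an arbitrary stand-in.

    Code:
    sysuse auto, clear
    save mydata, replace
    capture noisily use mydata, mixedcase
    * Stata refuses: option mixedcase not allowed, r(198)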

    Posts I've seen elsewhere run the complete spectrum, from ChatGPT making forums totally redundant -- or teaching Stata quite unnecessary, as students can just use it to find their code -- to it being completely useless at Stata, with lots of confident but contradictory explanations as to why. One explanation runs that it's because Statalist is a mess, so there you go folks, it's all your fault, and mine too.

    I've got to be boring and say that the truth is somewhere in between, depending on what you ask.

    On a related note, I find the position of my own first university delicious: students can certainly use ChatGPT, just not for coursework or examinations!

  • Federico Tedeschi
    replied
    Originally posted by George Ford View Post
    I've asked it a bunch of questions and it makes a lot of factual errors.
    It seems to me it even invents non-existent options:
    Attached files: chatgpt_one.docx, chatgpt_two.docx
    Last edited by Federico Tedeschi; 07 Mar 2023, 04:11.

  • Nick Cox
    replied
    I would be very happy if people found that some simple queries could be answered this way rather than through Statalist, or any other technical forum. AI no more undermines, or competes with, technical forums than (say) people solving their problems by reading the manual, or watching videos, or getting support from individuals privately.

    I am just curious about whether we will start hearing stories of people doing very odd things that are quite wrong or bizarre -- and reviewers or examiners then hearing or slowly unearthing the fact that the underlying code came from this source! The consequences could be dire for individuals, whether failing a degree, or damaging their reputation or prospects by being associated with unacceptably poor work.

  • Bruce Weaver
    replied
    Originally posted by Carlo Lazzaro View Post
    Some interesting issues concerning the (mis)use of AI at large are covered here (https://pubmed.ncbi.nlm.nih.gov/36549229/).
    See also: https://www.theguardian.com/science/...thor-on-papers

  • Jared Greathouse
    replied
    I would imagine so. It really isn't a topic for me to study, but the fancy-schmancy chat bot is only as good as it's been trained to be. It isn't sentient, in the sense that it cannot actually think in the way we do. My best guess, though, would be to give it a set of more complicated tasks than "how to program OLS in Stata" or something similar -- something with a definite answer that only humans could produce. In fact, I think programs like scul are great examples of this!

    Writing scul wasn't just a matter of using the LASSO: I had to devise numerous unit tests, subroutines, refactorings of code, decisions about case-specific commands to use, and a host of other things you can only do as a sentient being. ChatGPT no doubt knows more code than I do, but we have an advantage it doesn't: we have sentience and can learn. I think there are plenty of ways we could test this (any number of ways, in fact), but it really isn't my place to do so.
