  • #16
    Wow, I wonder what happens now when students all submit super well-done statistical analyses for their homework... interesting times ahead.
    Best wishes

    (Stata 16.1 MP)

    • #17
      I've asked it a bunch of questions and it makes a lot of factual errors. It will provide similar language for similarly worded questions. I think these artifacts will be the "tell." I suspect this will not be unlike the migration of "regression by hand" to statistical software. It will make writing papers easier for sure, but it will never replace a clever question.

      • #18
        Nooooo... ChatGPT has come to take our jobs!

        • #19
          Some interesting issues concerning the (mis)use of AI at large are covered here (https://pubmed.ncbi.nlm.nih.gov/36549229/).
          Kind regards,
          Carlo
          (Stata 18.0 SE)

          • #20
            That's cool. ChatGPT is a co-author of that editorial.

            • #21
              George:
              yes, I was impressed too!
              Kind regards,
              Carlo
              (Stata 18.0 SE)

              • #22
                I'm really enjoying watching all of the ChatGPT content coming out in the programming and medical communities. That said, I wonder how one might go about systematically evaluating the quality of the programming advice. To what extent does the model get things wrong, and are there systematic conditions under which it generates incorrect responses? One crude harness is sketched just below.
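
                A minimal sketch of one such harness, in Stata: feed it a list of model-suggested snippets (the three here are placeholders, one of them using an invented option), run each under capture, and count the failures.

                    * hypothetical model-suggested snippets; the third uses an invented option
                    local snippets `" "sysuse auto, clear" "regress price mpg" "sysuse auto, mixedcase" "'
                    local bad 0
                    local total 0
                    foreach cmd of local snippets {
                        capture `cmd'          // run the snippet, trapping any error
                        if _rc local ++bad     // nonzero return code = the snippet failed
                        local ++total
                    }
                    display "snippets that failed to run: `bad' of `total'"

                Of course, a zero return code only means the code ran, not that it did the right thing; scoring semantic correctness would need task-specific checks.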

                • #23
                  I would imagine so. It really isn't a topic for me to study, but the fancy-schmancy chatbot is only as good as what it has been trained on. It isn't sentient, in the sense that it can't actually think the way we do. My best guess, though, would be to give it a set of tasks more complicated than "how to program OLS into Stata" (a one-liner; see the sketch after this post) or something similar: something with a definite answer that only humans could produce. In fact, I think programs like scul are great examples of this!

                  Writing scul wasn't just a matter of using the LASSO: I had to devise numerous unit tests and subroutines, refactor code, decide on case-specific commands to use, and do a host of other things you can only do as a sentient being. ChatGPT no doubt knows more code than I do, but we have an advantage it doesn't: sentience and the ability to learn. I think there are any number of ways we could test this, but it really isn't my place to do so.
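
                  For reference, a sketch of the trivial baseline task mentioned above, using Stata's shipped auto dataset; anything worth testing the model with should be well beyond this.

                      * OLS in Stata is a one-liner once data are in memory
                      sysuse auto, clear
                      regress price mpg weight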

                  • #24
                    Originally posted by Carlo Lazzaro:
                    Some interesting issues concerning the (mis)use of AI at large are covered here (https://pubmed.ncbi.nlm.nih.gov/36549229/).
                    See also: https://www.theguardian.com/science/...thor-on-papers
                    --
                    Bruce Weaver
                    Email: [email protected]
                    Web: http://sites.google.com/a/lakeheadu.ca/bweaver/
                    Version: Stata/MP 18.0 (Windows)

                    • #25
                      I would be very happy if people found that some simple queries could be answered this way rather than through Statalist or any other technical forum. AI no more undermines, or competes with, technical forums than do (say) people solving their problems by reading the manual, watching videos, or getting support from individuals privately.

                      I am just curious about whether we will start hearing stories of people doing very odd things that are quite wrong or bizarre -- and reviewers or examiners then hearing of, or slowly unearthing, the fact that the underlying code came from this source! The consequences could be dire for individuals, whether failing a degree or damaging their reputation or prospects by being associated with unacceptably poor work.

                      • #26
                        Originally posted by George Ford:
                        I've asked it a bunch of questions and it makes a lot of factual errors.
                        It seems to me it even invents nonexistent options:
                        Attached files: chatgpt_one.docx, chatgpt_two.docx
                        Last edited by Federico Tedeschi; 07 Mar 2023, 03:11.

                        • #27
                          See https://imgur.com/7PBId6f, where it recommends running use with a mixedcase option. It seems that StataCorp never got round to implementing that; in fact, so far as I can see, no such option even makes sense in principle for use. (A quick check is sketched below.)
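
                          As a minimal sketch, assuming Stata's shipped auto dataset: you can confirm that the option is rejected, and then do what ChatGPT was perhaps groping for, normalizing variable-name case, with the real rename syntax.

                              capture sysuse auto, mixedcase   // the invented option; capture traps the error
                              display _rc                      // nonzero return code: option not allowed
                              sysuse auto, clear               // the legitimate load
                              rename *, lower                  // real syntax: force all variable names to lowercase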

                          Posts I've seen elsewhere run the complete spectrum, from ChatGPT making forums totally redundant -- or teaching Stata quite unnecessary, as students can just use it to find their code -- to its being completely useless at Stata, with lots of confident but contradictory explanations as to why. One explanation runs that it's because Statalist is a mess, so there you go, folks: it's all your fault, and mine too.

                          I've got to be boring and say that the truth is somewhere in between, depending on what you ask.

                          On a related note, I find the position of my own first university delicious: students can certainly use ChatGPT, just not for coursework or examinations!

                          • #28
                            I agree that ChatGPT is great for beginners like me. I have been using it daily to answer simple questions about Stata code, but it won't produce code for some issues.
                            I hope that with time it gets smarter and provides more useful answers to complicated issues.

                            • #29
                              The problem is that, when ChatGPT doesn't know the answer, it usually invents one (be it a publication or a software command option). And if you point out the error, it usually produces similar wrong answers until it either returns to the original one or blames you for not having installed the proper package/library. But yes, it seems reasonable that the simpler the question (or, better put, the easier it is to find a correct answer to it online), the higher the likelihood of getting a correct and useful answer. A couple of quick sanity checks are sketched below.
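
                              When a suggested command or package sounds unfamiliar, two cheap checks can catch an invention before you build on it; the names below are deliberately hypothetical.

                                  capture which madeupcmd           // which locates installed commands
                                  if _rc display "no such command installed"
                                  capture ssc describe madeuppkg    // look the package up on SSC
                                  if _rc display "no such package on SSC"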

                              • #30
                                ChatGPT is a very large, very sophisticated next-token prediction algorithm. It's a little ironic that the model, trained on a corpus of data from the internet, doesn't find "oh, you're right, my mistake" to be a very likely path to go down.
