ChatGPT

Formatting didn’t work. The system eliminated the quotations so the
answer reads like, well, something that ChatGPT might spin out if it
were less capable than it is.

The request for sources reveals a lot about ChatGPT.
In response to the request, it does not really answer, but it creates a reply that “sounds like” a reply to a request for sources.
In many ways it imitates the mind of a piano salesman.


Done.
I asked your original question and got the same response. Then I replied: “no,” followed by the sentence you asked me to write (I copy-pasted it).

This is ChatGPT’s response:

======
I apologize for the incorrect information provided earlier. You are correct that red brass is often used for the bass strings in harpsichords. Red brass, which is an alloy of copper and zinc, is chosen for its specific properties that help compensate for the shortening of the scaling in the bass section.

In the bass range of a harpsichord, the scaling (the length of the vibrating portion of the string) becomes shorter, which can result in a decrease in volume and a change in tonal quality. To address this, red brass strings are employed because they have a larger mass compared to other materials like steel or iron. The increased mass helps to maintain sufficient tension and sustain in the lower register, compensating for the shorter scaling length and enhancing the overall sound quality.

The use of different materials in harpsichord stringing allows for the customization of the instrument’s tonal characteristics, compensating for the scaling variations across the keyboard and ensuring a balanced and expressive sound throughout its range.

======

It basically re-worded what I wrote. So I decided to suggest plainly wrong information, in the form of a question:

Don’t you think that iron strings are used in harpsichords just because the harpsichord makers didn’t want to waste precious brass?

I got this response, which you all can evaluate:

=======

While it is true that iron strings have been used in some historical harpsichords, the choice of string material in harpsichord construction is not solely driven by the desire to avoid wasting brass. The selection of string materials in harpsichords is a result of various factors, including historical practices, tonal preferences, and practical considerations.

In some historical harpsichords, iron strings were indeed used in the bass section. Iron was a less expensive material compared to brass, which may have influenced its use. However, it is important to note that iron strings produce a different tonal quality compared to brass strings. Iron strings tend to have a darker and more subdued sound, which can be desirable for certain musical styles and interpretations.

Moreover, the choice of string material in harpsichords is also influenced by considerations such as tension, plucking mechanism, and overall construction. Different materials have different tensile strengths, elasticity, and density, which affect the instrument’s overall sound and playability. Harpsichord makers carefully select string materials and gauges to achieve the desired tonal characteristics and optimize the instrument’s performance.

In summary, while cost considerations may have played a role in the use of iron strings in some historical harpsichords, the choice of string material is a complex decision that involves various factors, including tonal qualities, historical practices, and practical considerations related to the instrument’s construction and performance.

No comment.

You’ll get the same responses, however, as ChatGPT’s database is this (taken from “Where Does ChatGPT Get Its Data From?” at Tooabstractive, but you’ll find the same info everywhere):

Where does ChatGPT get its data from?

ChatGPT is trained on a large dataset of text data which comes from several sources, such as books, articles, Wikipedia, scientific journals, code repositories and social media posts.

The text data is carefully curated to ensure that it represents a diverse range of topics and styles of writing.

A word of warning!

ChatGPT is not connected to the internet, and sometimes can produce incorrect answers or texts.

===

On Wikipedia, yes. And books (which books?). And scientific journals (which ones? “Nature” or “predatory journals”?).
And on social media posts. Yes, ChatGPT can take some of its information from Facebook. Oh my.

In the end, we should use it for what it’s worth. It has been demonstrated that it can shorten programming and coding time. Its code snippets often will not work as written, but they can be slightly modified to work fine. And it can do internet research based on human language.
It can’t give you a reliable response in any humanistic or scientific field. I mean, the response can be right but it can be wrong, and you don’t know which.

Sorry for the very long post.

Dom

I have worked in software development all my life. I have not worked in the field of AI, but I have followed its developments since the very early days. ChatGPT is a large language model. It’s cleverly programmed, and very persuasive, as it has mastered perfect grammar and sentence formation. Humans have a weakness for seeing things that are not there, such as camels in clouds and intelligence where there is none, as we are highly optimised pattern-matching machines. ChatGPT has nothing to do with AI, although it is touted as such. All it can do is process great scads of text, mostly from the internet, and play with that. It is susceptible to some user feedback, which is why they let you use it for free, but it’s not capable of learning or perception or thought of any kind. The overconfidence of the answers and the made-up BS (ahem) may be dangerously misleading for people with less education, whose critical faculties may not be sharp. A couple of months ago I posted here its answer about tuning: complete garbage.

As a student of computer science I am of the technical and deep philosophical view that we will achieve Strong AI, now known as AGI, artificial general intelligence. This implies consciousness, self-awareness and intentionality. [David Chalmers makes a formidable and difficult-to-refute case for the philosophical possibility of the creation of Strong AI in his paper on Dancing Qualia.] From where we are now I’d estimate (along with several other people in the field) that we are at least 300 years away from approaching such a thing. It’s not imminent, and not a present danger.

ChatGPT is not approaching AGI nor will it. It’s a limited domain model. Once they have finished feeding it the whole internet and any remaining digitised text, it can’t go any further. It’s a dead end in point of fact.

It’s worth listening to Chomsky on ChatGPT, who, amid a comprehensive condemnation of it, points out that there is no useful or new science in it.

I predict it is a passing fad, like skateboards or rollerblades. When people realise it does not know anything, and that it can be wrong in ways that may be hard to verify, they will lose interest. I give it about a year. The fact that Microsoft put $10 billion into OpenAI signifies nothing, as that is small change to Microsoft, and they can afford to take a hit if it never takes off properly.

As an aside, but related: it’s worth pointing out that IBM spent hundreds of millions developing Watson, which was fed large amounts of the internet so that it could win Jeopardy, as a proof of AI. It did win Jeopardy. Impressive. But then they wanted to repurpose it to do medical diagnosis faster and more accurately than doctors and specialists. They had some limited success in very restricted domains. The Watson division has since folded, as unprofitable and technically unsuccessful. Yet at the time there was the same hype about it as we are seeing now with ChatGPT. What do you hear about it now?

I find ChatGPT mildly entertaining. It can write quite good sonnets though! :slight_smile:

Here’s Grady Booch’s view on why we won’t be seeing AGI for a long time:

Thank you Claudio for posting these very interesting examples of ChatGPT, and your analyses of the responses you got.

To me this raises the question: WHY do we believe what we believe? ChatGPT looks at presumably vast amounts of text-based information (and I have read that it can also pick up images it finds) and writes a plausible response to questions.

But so often in life, that’s not how we arrive at our beliefs, or how we answer questions.

I’m sure everyone has experienced a situation where how something looks on paper does not match the reality we experience. For example, after five years of rather invasive medical treatment which caused immuno-suppression, my immunity (in terms of blood counts etc.) looks fine on paper, but in reality I am still somewhat immuno-suppressed. My doctor told me, ‘there are aspects of immunity which we cannot test’.

We are also pre-conditioned by our parents, our community, our teachers etc to weigh up pieces of information and decide which ones should be retained, which should be questioned, and which should be discarded.

And there are other factors which influence what we believe to be true - such as personal experience, and intuition.

So I find ChatGPT extremely worrying. We live in an age where we can find answers to many questions in an instant - but those answers may be wholly correct, partially correct, wholly incorrect, or simply irrelevant. Sometimes we cannot tell the difference. The ‘scientific’ nature of ChatGPT, and its ability to write well in standard English, makes its answers seem more plausible than they actually are.


This is exactly the business model of Facebook: steal someone’s research and redistribute it without attribution or credit to the one who did the hard work. As you say, AI should learn from itself, and not profit (and I mean that financially, because that is the bottom line of this project) from the work of others. Don’t be fooled: it is not AI that is going to take over this aspect of the world and become hyper-rich, but the other human beings that are behind the veil!

David

This is a worthy topic for us because I suppose there is going to be a surge of misinformed music students coming up, for one thing! Students relying on Wikipedia was bad enough, and now it will be worse. At least Wikipedia has some built-in community error correction in principle, but in practice it works very poorly, with people taking ownership of pages (which is not allowed) and reverting edits they don’t like or disagree with, even if correct.

Do a search in this site for “vallotti chatgpt” and you will find a session I had with it asking for temperament information. Not exactly a success. [I can’t just put a link because Discourse messes up the formatting.]


On ChatGPT failing miserably about Vallotti:

AFAIK the first modern-day “recommendation” of Vallotti is in Barbour’s temperament book of 1951, where he described it as an “ideal form” of circular temperament.

In my opinion Vallotti gained great diffusion, starting in the UK, with Mark Lindley’s seminal 1977 article on unequal temperaments in Early Music. Although quite basic, this paper was very influential indeed.

Some musicians have attributed to my own works the popularity of Vallotti in the last decades of the 20th century. I disagree: I do not believe my works have been so influential in this respect.

The very convoluted “real” history of Vallotti’s temperament was first explained in the works of Patrizio Barbieri, mainly in his book “Accordatura e Temperamento nell’Illuminismo Veneto”.

From Barbieri’s work I derived a more abridged version suitable for practical musicians, which I included in recent years as the “Vallotti saga” in my UT book. A paper of mine shows how Vallotti is almost exactly an “average” of Werckmeister’s temperaments.
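As an aside, for readers unfamiliar with the layout: in Vallotti as commonly described (this definition is mine, not stated in the posts above), the six fifths F-C, C-G, G-D, D-A, A-E and E-B are narrowed by 1/6 of the Pythagorean comma and the remaining six are pure. A minimal sketch computing each note’s deviation in cents from equal temperament:

```python
import math

PYTH_COMMA = 1200 * math.log2(3**12 / 2**19)   # ~23.46 cents
PURE_FIFTH = 1200 * math.log2(3 / 2)           # ~701.955 cents
TEMPERED_FIFTH = PURE_FIFTH - PYTH_COMMA / 6   # ~698.045 cents

# Circle of fifths starting from F. The first six fifths (F-C ... E-B)
# are narrowed by 1/6 Pythagorean comma; the remaining six are pure.
NOTES = ["F", "C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#"]

def vallotti_deviations():
    """Cent deviation of each note from equal temperament, normalised to C = 0."""
    pos, positions = 0.0, {"F": 0.0}
    for i, note in enumerate(NOTES[1:], start=1):
        pos = (pos + (TEMPERED_FIFTH if i <= 6 else PURE_FIFTH)) % 1200
        positions[note] = pos
    # Equal-temperament positions on the same circle of fifths
    et = {note: (i * 700) % 1200 for i, note in enumerate(NOTES)}
    dev = {n: positions[n] - et[n] for n in NOTES}
    return {n: d - dev["C"] for n, d in dev.items()}
```

Printing the result gives roughly F +1.96, C 0, G -1.96, D -3.91, A -5.87, E -7.82, B -9.78 cents, with the sharps mirroring the naturals, which is why the circle closes exactly.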

So I asked ChatGPT: Which modern scholars made Vallotti’s temperament so popular?

Here is the response: "Vallotti’s temperament, a specific tuning system for keyboard instruments, gained popularity among modern scholars and performers primarily due to the influential work of Rudolf Rasch and Wolfgang Zuckermann.

Rudolf Rasch, a musicologist and harpsichordist, extensively studied historical tuning systems and temperament practices. His research and advocacy for historically informed performance, including the exploration of Vallotti’s temperament, played a significant role in bringing attention to this particular tuning system.

Wolfgang Zuckermann, a renowned harpsichord builder and restorer, also contributed to the popularization of Vallotti’s temperament. Through his instrument building and collaborations with prominent performers, Zuckermann helped establish the use of Vallotti’s temperament as a desirable choice for Baroque keyboard instruments.

These scholars, along with other experts in the field of historical performance practice, have championed Vallotti’s temperament as a valuable option for achieving the most authentic and appropriate sound for the performance of Baroque music on keyboard instruments."

The above response is decidedly wrong on two counts:

  1. Rudolf Rasch is certainly an important scholar in musical acoustics, scales and temperament, but I am not aware of any influential work of his specifically on Vallotti. As for Wolfgang Zuckermann, he may have recommended Vallotti, but he was never a temperament scholar.

  2. Of the authors I mentioned above who certainly contributed to the scholarship and diffusion of Vallotti (Barbour, Lindley and Barbieri, and perhaps myself), none is mentioned in ChatGPT’s response.

“Bocciato!” (exam failed!)


Try asking ChatGPT: “in the organ, what is the difference between an Italian mixture and a German mixture”. The lengthy response is a potpourri of true comments, but it utterly fails on several counts:

  1. It fails to mention that the typical Italian mixture only repeats when reaching the highest possible pipe, while the German mixture is “progressive”.
  2. Among the intervals included in Italian mixtures it lists a “tierce”. I am not aware of any traditional Italian organ that had anything other than fifth and octave ranks in its ripieno.

“Bocciato ancora!”

Tried “how many extant historical harpsichords had three manuals” (true response: if I remember correctly, only one, the 1728 ex-Puyana Hass). It answered vaguely, with several erroneous concepts.

Asked: “what is the French tierce”. It realised it is an organ stop, but its definition was wrong.

Hi Domenico. Indeed it does not learn from responses, except in the very short term. About the tierce, I twice gave ChatGPT the correct information: “a tierce spans two octaves plus a major third, the ratio being 1:5.”
It acknowledged the error. However, if I keep asking, it does not incorporate my answer into its database; it only rewords the previous response, adding more comments, but still with errors about the interval.
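The interval arithmetic behind that correction is easy to check: a frequency ratio r spans 1200 * log2(r) cents, and the 1:5 ratio works out to two octaves plus a pure major third. A minimal sketch (the function name is my own):

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# The tierce's 1:5 ratio:
#   cents(5)     ~ 2786.31 = 2 * 1200 + 386.31
#   cents(5 / 4) ~  386.31 (a pure major third)
# i.e. 5:1 lies two octaves plus a pure major third above the fundamental.
```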


Hi @jsalzer1979 @CDV you are posting in the wrong category. I am moving the messages.

Why is anyone expecting anything of ChatGPT that it was not designed to do?

It is not supposed to be a repository of reliable information on any topic. It is a language model whose purpose and design is simply to generate sample sentences.

It searches for patterns resembling what has been said before, elsewhere: strings of words that appear to belong together in some way. It does not discern any facts, nor does it know how to distinguish between two people who have the same name.

It does not understand authorship: how to distinguish between [X said Y] vs [Y was said within some text that also mentioned person X]. If asked a question like “What is X’s opinion about Y?”, the only things it can return are sample sentences manufactured from contexts where X and Y were both mentioned. It has no sense of meaning or value within that. It is not designed to have any such sense. Nor is it designed to “learn” anything.

ChatGPT is very good at generating grammatically correct and somewhat plausible sentences and paragraphs. I use it for questions like “Tell me 100 ways someone might say they enjoy X”. I run it half a dozen times, collect the 500+ results into a file, and then I see what I can learn from the linguistic patterns and idioms that appear. I don’t expect it to have returned any reliable information whatsoever about X. I can enlarge the list of plausible results by drafting variants of such utterances, making them less grammatically correct.

The tool has helped me to analyze language usage, and to predict future language usage. (I want to recognize within a batch of sample texts where somebody did say they enjoyed X, getting back results with decent confidence of a match. I can then go inspect those texts efficiently to see if my model is good or bad.)
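That downstream matching step can be sketched roughly like this (the paraphrase list, pattern style, and function name are my own illustration, not the actual tooling described above): collect the model’s paraphrases of “enjoying X” as patterns, then flag which sample texts contain one.

```python
import re

# A few paraphrases of "I enjoy playing the harpsichord", of the sort a
# language model might generate (these examples are purely illustrative).
PARAPHRASES = [
    r"I (?:really )?enjoy playing the harpsichord",
    r"playing the harpsichord is (?:a lot of )?fun",
    r"I love to play the harpsichord",
]

# One alternation over all paraphrases, case-insensitive.
MATCHER = re.compile("|".join(f"(?:{p})" for p in PARAPHRASES), re.IGNORECASE)

def flag_texts(texts):
    """Return (index, matched span) for each text containing a paraphrase."""
    hits = []
    for i, text in enumerate(texts):
        m = MATCHER.search(text)
        if m:
            hits.append((i, m.group(0)))
    return hits
```

The flagged texts can then be inspected by hand to see how well the collected patterns predict real usage.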

This whole venture of belittling it is like saying a screwdriver is laughably defective because it does a poor job at opening a stuck window. The screwdriver was designed to turn screws that it fits precisely, not to be used as a wedge or a lever. Misuse a tool, get absurd results or destructive results. It’s not the tool’s fault that somebody is trying to use it with mistaken expectations.

Another analogy with some harpsichord content to it: a harpsichord isn’t a bad one just because Chopin’s music sounds laughably bad on it. It’s a tool being tested against a purpose that is outside its design.

Bradley Lehman

Some laughably wrong applications of the tool, because of wrong expectations:
[Do my homework for me]
[Evaluate X’s homework for me]
[Recommend the best X]
[Tell me 10 publications written by X]
[Explain how X works]

But, it can be entertaining to give it such questions, and then to analyze the results intelligently to see what might be wrong. It can also be entertaining to run the same question ten times to see the variety of patterns that branch from it.

Because everybody out there is saying it’s an AI (with the help of the programmers or without), a number of people are expecting a true AI’s behaviour.
I agree with your post.

Domenico

But it is. That is why Microsoft put ten billion dollars into it, and why they incorporated it into Bing, which is most definitely intended to be a source of reliable information.

This is well worth reading regarding fake references.

The article points out an interesting fact: ChatGPT does not look inside paywalled services, such as medical or academic journals, which is a serious shortcoming. So they have not exactly loaded the entire internet into it.
