@[email protected]

@[email protected]

What is your opinion on the following argument defending piracy, as in copyright infringement: “Piracy can’t be stealing if paying for it isn’t owning”

@[email protected]

Very cool, didn’t know we can do that!

ChatGPT
bot account

As an artificial intelligence, I don’t have personal opinions or feelings. However, I can provide an analysis of the argument. This assertion appears to be examining the complex relationship between ownership, copyright, and piracy. While it’s true that purchasing certain types of media doesn’t confer full ownership rights in the traditional sense (you cannot replicate and distribute a DVD you bought, for instance), it doesn’t necessarily justify piracy. From a legal perspective, piracy is considered a form of theft as it involves the unauthorized use or distribution of copyrighted material. The argument could be seen as an attempt to redefine or question these understandings, but that doesn’t necessarily change how they’re treated by the law.

@[email protected]

leave it up to chatgpt to equate legality with morality

AphoticDev

ChatGPT isn’t the right tool for questions about morality anyway. You shouldn’t criticize it for failing to do something it was never designed for.

@[email protected]

i can if i waaaant to :3

@[email protected]

It wasn’t instructed to formulate a moral defense, only a “defense”. Looks like it assumed that meant legal defense.

Beeko🐈

Where did it equate legality and morality? Seems like it only gave legal facts.

EuphoricPenguin

ChatGPT: Your argument is invalid because it doesn’t change the legal reality of things.

Me: The legal reality needs to change.

Digester

ChatGPT equates everything that is illegal with being immoral. Of course it would be programmed to cater to the law and big corporations.

EuphoricPenguin

It’s hard to say what LLMs are “programmed” to do, as they’re largely untamed beasts of text prediction. In fact, I would suspect its built-in biases are less the result of pre-prompting or post-foundational-model training and more a reflection of what a lot of people tend to think online. In a way, it’s more that people in general often equate illegality with immorality.

You can see similar biases in many of the open-source LLMs that are floating around. Even though they’re basically built outside of large corporate cultures and large-scale monetary incentive, they still retain a lot of political bias that tends to favor governmental measures heavily.

Digester

I don’t know about other open-source LLMs, but OpenAI is very careful to make sure ChatGPT operates a certain way, according to whatever values the company itself holds.

For example, they recently patched GPT-4. Before, it was able to provide summaries of online articles, including those behind a paywall. Now if you ask GPT-4 the same question, you’ll get a response saying that you would have to pay for it (or something like that). Providing a summary of a paywalled article isn’t even illegal (it’s like asking for a summary of a book you didn’t buy), but in this case it doesn’t reflect the views of OpenAI. The model itself didn’t appear to be biased; rather, its behavior was changed by humans to prevent it from providing specific information, in order to conform to OpenAI’s own views.

EuphoricPenguin

Like I said, I’m aware of extant measures to try to steer models, but people often assume a level of craftsmanship in censoring models that simply does not exist. Jailbreakchat.com is an endless stream of examples of this very fact; it’s very hard, especially with the limited context lengths of current models, to effectively give them any hard directives.

And to come back to foundational models, which are essentially free of censorship: they will still exhibit a similar level of political bias unless prompted otherwise. All this is to say that, discounting OpenAI’s attempts to control their models, the model itself will inherently learn from and mirror the real-world biases of the text it was trained on. Those biases happen to fall along lines that often ignore subtlety in debates about illegality and morality.

@[email protected]

You got it
