Which makes the integral sign ∫ a non-discrete for-loop
That does not help. What does non-discrete mean?
Continuous.
Instead of jumping from 1 to 2 to 3, we move smoothly across all (typically real) numbers. Naively summing the function's value at every point would blow up to infinity almost every time, because there are infinitely many real numbers between any two distinct real numbers. So instead, we approximate the area with a bunch of skinny rectangles whose bottoms sit on the x axis and whose tops sit at the function's value at the rectangle's left edge. As we shrink the width of the rectangles, the sum approaches the continuous notion.
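A minimal sketch of that rectangle picture (a left Riemann sum); the function, interval, and rectangle count below are illustrative choices, not from the thread:

# Left Riemann sum: approximate the integral of f over [a, b]
# with n skinny rectangles sitting on the x axis.
def riemann_sum(f, a, b, n):
    width = (b - a) / n          # width of each rectangle
    total = 0.0
    for i in range(n):
        x = a + i * width        # left edge of the i-th rectangle
        total += f(x) * width    # rectangle area: height at left edge * width
    return total

# Shrinking the rectangles (growing n) approaches the exact value;
# for f(x) = x^2 on [0, 1] the integral is 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000))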
Continuous means "smooth": there are no jumps.
Discrete means there are jumps.
Short answer: Imagine that the integer used in the for loop is a float instead.
Longer, a bit more precise answer: an integer can only take discrete values (e.g. -1, 0, 1, 2, …, 69, …).
A real number (~ a float with infinite precision) can take infinitely many values between two discrete values.
An integral is, to put it simply, a sum of all the results of taking those infinitely many values between two discrete values (an interval) and feeding them to the given function.
It’s a for loop over an infinite set of real numbers rather than over a finite set of integers => a non-discrete for loop
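For reference, here is the discrete case the thread is riffing on, written out as a plain for loop (the 3n summand matches the sum(3*n for n in range(5)) snippet quoted later in the thread):

# Discrete sum: sigma with n running over 0..4 and summand 3n.
total = 0
for n in range(5):   # n = 0, 1, 2, 3, 4 -- jumps of exactly 1
    total += 3 * n
print(total)         # 30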
If you take a measure-theoretic approach and allow different measures to be used, it also lets the integral sign be a discrete for-loop: integrating against the counting measure turns the integral back into an ordinary sum.
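In symbols, the claim is roughly this (a sketch, where $\mu$ is the counting measure on $\mathbb{N}$):

$$\int_{\mathbb{N}} f \, d\mu = \sum_{n \in \mathbb{N}} f(n)$$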
Oh cool, I know who this person is; she did a couple of amazing videos on Bézier curves and splines.
Better then
It’s about math teachers, not English teachers.
xor
The hard part of math isn't understanding esoteric symbols; it's the theory behind it and its application. Number theory will mindbreak almost all people.
The hardest thing for me about math was the symbols. Greek, Roman, English.
Once you get past that, the numbers are easy.
Number theory and higher levels of math are a completely different beast. Once your exam is over 50% just writing proofs, you will change your tune. Unless you are built for it.
Yeah, that's not explained better than a math teacher would. They just swapped notation common in math for notation common in one specific programming language. It's only easier for the audience who happens to be familiar with programming in general, and that language in particular.
I think the concept of a for loop is easier to learn, even for non-programmers, as biased as I may be.
I think you’d be hard pressed to find someone with any sort of programming background, even just as a hobbyist, who doesn’t understand that for loop notation, whether or not they know the specific language it’s from. (I couldn’t even tell you what specific language that’s from, because that notation matches so many different ones.)
I have a 15-year-old son; he definitely has not seen summation in math classes yet, but he has far more than enough programming experience (even just from school) to understand the for loop.
I think it's Java.
Could also be Javascript or C#.
Or C or C++
Java/C# would have types before the variables, e.g. for (int i = 0; i < 5; i++).
Only if they’re declared in the snippet.
It’s any C derivative language.
People who are arguing that one way of expressing these concepts is easier to learn/understand than the other are missing the whole point. Mathematical notation was not designed to teach students how to do math or explain how to design algorithms. It was invented to communicate precise, abstract ideas concisely between mathematicians who already understand what the symbols mean.
Mathematicians require a notation that has the flexibility to manipulate mathematical objects/symbols in a way that naturally emphasizes their properties and relationships. Often they don’t even care whether the objects they’re studying are even computable or have a numerical representation. They just need them to have certain properties so that they can be manipulated appropriately.
Discrete sums are a rare example of when the mathematical notation overlaps with the description of an algorithm for computing its value (and the overlap is not even complete; infinite sums are easily represented in math notation but are practically uncomputable when implemented naively). Every other advanced mathematical concept puts a premium on ease of symbol manipulation over computability: integrals, derivatives, matrix multiplication, abstract algebra, etc.
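A toy illustration of that gap: the one-line notation Σ (n = 1 to ∞) 1/n² denotes an exact value, π²/6, but a naive implementation can only ever truncate the sum (the cutoff below is arbitrary):

import math

# Naive partial sum of 1/n^2 for n = 1, 2, 3, ...
# The notation says "sum over all n"; the loop has to stop somewhere.
total = 0.0
for n in range(1, 1_000_000):
    total += 1.0 / (n * n)

print(total)             # ~1.6449331
print(math.pi ** 2 / 6)  # the exact value the notation denotes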
TL;DR math notation is complex because its intended audience is people who already understand it, want maximum flexibility of symbol manipulation, and historically didn’t really care about practical computation.
You're right that the symbols weren't created so students can learn them, but students have to learn them at some point. For me personally, as a student who knows how to program, figuring out that these symbols kind of represent for loops made them easier to understand.
Wow, this is by far the clearest I’ve ever seen this explained.
You can reduce this readable code to one line of confusing Python list comprehension that runs 100x slower!
I don't think you can use a Python list comprehension in this case, since you don't want a new list, but rather to reduce it to a single value.
What’s wrong with list comprehensions? Do I just have Stockholm Syndrome at this point?
I would skip the square brackets and just use a generator expression:
sum(3*n for n in range(5))
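For comparison, the bracketed list-comprehension version the earlier comment had in mind works too; it just builds a throwaway list before summing:

print(sum([3 * n for n in range(5)]))  # list comprehension: allocates a temporary list
print(sum(3 * n for n in range(5)))    # generator expression: streams the values
# Both print 30; the generator skips the intermediate list.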
Yes, the classic readability of C-style for loops.
How about some Haskell
let numbers = [1, 2, 3, 4, 5]
let sumOfNumbers = sum numbers
I still don't understand, but thanks.
Ok, but this is a bit of an unfair comparison, given that Freya is pretty god-tier at actually explaining math things.
Her videos about splines are god-tier
Went to look for the splines video and apparently I already watched it? And her other videos too. I do not remember binging this.
This isn’t even god tier, it’s just that more people are familiar with the basics of programming than higher level math, which is honestly a good thing.
Maybe not this, but her video on splines is amazing.
I’m amazed people in here are calling a summation higher level math. Apparently my school experience was way different than a lot of other people’s.
Freya is a really good programming-and-maths communicator, so it doesn't surprise me.
Not knowing about Splines before
Feeling like understanding Splines afterwards 🥰
she spline on my bézier curve til I G¹ continuity
I remember how confused I was when I first encountered i = i + 1… like, what 🤨? How can this be correct? This thing has to be wrong… and then you start seeing the logic behind it and you're like "oooh, yeah, that seems to work… but still, this is wrong on almost every level in math"… and then you grow a bit older and realize that coding has nothing to do with math; instead, it's got everything to do with problem solving. If you like to name your variables peach, grape, c*nt, you can, and if that helps you solve the problem, even better. Just make it work, i.e. solve the problem 🤷.
A monad is just a monoid in the category of endofunctors, what’s the problem?
I'm not a good enough coder or mathematician to know what that quote means 😂😀.
It's from a longer quote in "A Brief, Incomplete and Mostly Wrong History of Programming Languages" about the language Haskell:
1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that "a monad is a monoid in the category of endofunctors, what's the problem?"
Some other languages, e.g. Rust, also use monads. The point I was trying to make humorously was that many programming languages do use math concepts, sometimes even very abstract maths (like monads), and while it's not maths per se, programming and computer science in general can have quite a bit to do with maths.
Yeah, I get what you're trying to say now 😉. Still, they're mostly used when doing algos, which in real-world practice is almost never. We do all sorts of repetitive things, like sorting or user-input blocks, but new algos are… something you might do at NASA, CERN, or Wall Street, not in your everyday programming job. Sure, you might optimize a thing or two here and there, but that's about it 🤷.
Wait until you realize what math is all about
I think I do understand, but I'd rather not embarrass myself 😂.
Coding has nothing to do with math, yet the entire basis of computing and programming is Boolean algebra.
But isn't that kinda true for most things? If you go down deep enough, almost all tasks end up in physics and thus maths somewhere. But if I'm stacking shelves, I don't care that there are some pretty complicated mathy physics things that determine how much weight I can stack on the shelf. I just stack it.
That’s kinda how most of programming is related to maths. Yeah, math makes it all run, but I mostly just see maybe a little algebra and very simple boolean logic.
And the rest of my work is following best practices and trying to make sense of requirements.
You don't need to worry about the load capacity of the shelf, but only because somebody else already engineered it to be sufficient for the expected load. I'd argue that you aren't the coder in this analogy; you're the end user.
But how often, as a coder, are you going low-level?
If I want to sort a list, I don’t invent a sorting algo.
I don’t even code out a known sorting algo.
I just type .sort(), and I don't even care which algo is used.
Same with most other things. Thinking about different kinds of lists/maps/sets is something you do in university.
In reality, many languages (e.g. Python) don't even give you the choice. There are list(), dict(), set() and that's it. And even in languages like Java, everybody just goes for ArrayList, HashMap and HashSet. Can't remember a single time since university where I was like "You know what I'd fancy now? A LinkedList."
I honestly don't even know if Java offers any Map/Set implementations that don't use hash buckets.
And even of Boolean logic we only use a fraction. We use and, or, not, and equals. We don't use nand, nor, identity, xor, either material-conditional variant, the material biconditional, or their negations.
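That said, most of the missing connectives are one expression away in most languages; a quick Python sketch (the variable names are made up):

a, b = True, False

print(a ^ b)           # xor: true when exactly one operand is true
print(not (a and b))   # nand
print(not (a or b))    # nor
print((not a) or b)    # material conditional: "a implies b"
print(a == b)          # material biconditional: equality on booleans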
This is what I was actually trying to say, thanks for elaborating 👍.
I meant as in real world applications, like how much math do you need to know to sort a table or search through an array.
I mean, coding does have to do with math; it's usually just different notation. i = i + 1 in code is just i := i + 1 in math notation.
That's advanced calculus, and my guess is those notations were made up to give rise to a new field in math, one which has more to do with computers than math, so I don't think that counts.
What discipline do you think Alan Turing and John von Neumann were in?
Computation theory, but that’s not math as in regular math. It’s just a fancy way of expressing how things inside a computer work, so we can actually make better versions of it. You just have to express it somehow in math terms.
It's like saying engineers use math all the time. No, they don't. We use simple approximations of what is actually happening to dumb down the problem, cuz it does the job nicely and no one will notice the difference between what we used (a simple approximation) and the real thing (a full-blown advanced calculus model of the thing we're working on).
You mean they were not mathematics department professors?
Where?
Wouldn’t reducer be more precise?
Can you explain this out a bit more? I’m a self-taught programmer, of sorts, and I’m not quite getting this…
A reducer "reduces" a list of values to one value by applying some function to two values at a time.
For instance, if you reduce the list [1, 2, 3] with the sum function, you get (1 + (2 + 3)) = 6.
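In Python, that's functools.reduce. Note it's a left fold, so it actually groups as ((1 + 2) + 3); for addition the grouping doesn't matter:

from functools import reduce

# Collapse the list to a single value, combining two values at a time.
total = reduce(lambda acc, x: acc + x, [1, 2, 3])
print(total)  # 6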
I think this is pretty much the imperative equivalent of foldl (\acc i -> acc + 3*i) 0 [1..4].
Definitely, although I'm sure that under the hood it's all the same. Some (albeit high-level) languages also support a sum function that takes a generator as an input, which seems pretty close to this math notation.
The education system creates scarcity of knowledge to increase the profit of investment and spending; everything complex can be broken down into simple forms.
Sounds like a conspiracy theory.
Everything dealing with capitalism ends up sounding like a conspiracy theory. You're like "of course people wouldn't actually take this thing we, as humans, need and sell it," when suddenly air has been commodified and those who can't afford it are seen as not deserving of air.
Fuck! I'm 40 and this is the first time I understand the sigma sign!! Thank you!
Couldn't they just show this to me in 7th grade or something, when I had already learned Pascal?
The sigma sign shows up as "sum" quite a bit, but I didn't know about the for-loop thing.
I was into coding (JavaScript), but nope, they are unwilling to find creative new ways to help teach people; it's gotta be a nonsensical "one size fits all".