
Thread: Computer Science and Math and Stuff

  1. #481
    ^^vv<><>BASTART
    Posts
    7,768
    So I delved a bit further into that post, and some things became apparent to me after reading a bit more about Paul Taylor, that thread and his project.

    First, there's a great comment thread near the top, starting with this comment. The question was posed: "What is this obsession with trying to get rid of sets?" There is a minor attempt at a reply, but overall the theme of the discussion seems to be: there really isn't a good reason to be antiset.

    But that leads me back to Paul Taylor. Paul Taylor isn't a moron, he is a competent mathematician far beyond my abilities. But, after checking his other replies, it's definitely true that he is a finitist. As you pointed out, he said:

    This is not dogmatic Finitism
    Which is apt. He's not a Wildberger, but it's clear from reading posts like this that he's concerned himself primarily with constructivist-only, finitist mathematics. Which is fine; I've never claimed this method is wrong, but his program of type-theory-based mathematics appears to be grounded somewhat in his finitist beliefs. And while he does get cited in the HoTT foundational text, I can tell you that HoTT is not itself concerned with this bias of mathematics: e.g., the person I've talked with here (and who I might work with) is also cited and is not biased towards constructivism or finitism. Point being, his work in type theory does not appear to be a takedown of axiomatic set theory, at least not from the point of view of others who do HoTT and work in fields closely related to his.

    It turns out (finitistic) set theory, type theory and category theory form equivalent systems of mathematics up to a point:

    Quote Originally Posted by http://www.andrew.cmu.edu/user/awodey/preprints/stcsFinal.pdf
    The first and most obvious conclusion to be drawn from this is that the three systems of foundations are therefore mathematically equivalent. Elementary set theory at least as strong as our basic theory BIST, type theory in the form of higher-order logic, and category theory as represented by the notion of a topos, all permit the same mathematical definitions, constructions, and theorems—to the extent that these do not depend on the specifics of any one system. This is perhaps the definition of the “mathematical content” of a system of foundations, i.e. those definitions, theorems, etc. that are independent of the specific technical machinery, that are invariant under a change of foundational schemes. The very constructions that we have been discussing, for instance, in order to be carried out precisely, would have to be formulated in some background theory; but what should that be? Any of the three systems themselves would do for this purpose, and the results we have mentioned would not depend on the choice.
    And moreover, it's true that a huge amount of mathematics can be formulated in any of these three (equivalent) systems without invoking the axiom of infinity (an axiom of set theory). I'm just not sure what this gains you.

  2. #482
    That was quite kind of you, Reid, to oblige my interests and provide your take on this stuff.

    My understanding is that Paul Taylor believes that categories are more ergonomic than sets, and that he believes that trying to encode the important bits of a mathematical statement in set theory is unnecessarily burdensome.

  3. #483
    ^^vv<><>BASTART
    Posts
    7,768
    Point being, it seems Paul Taylor has an antiset bias, which can only mystify anyone given that type theory and set theory are equivalent foundations pre-infinity. Beyond that, finitism doesn't seem super important as a mathematical theory in its own right.

    I don't know if there's an equivalent to the axiom of infinity in type theory. That'd be an interesting question.
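    (For what it's worth, in Martin-Löf-style type theories the role the axiom of infinity plays in ZF is usually played by an inductive type of natural numbers. A sketch in Lean-style syntax — my illustration, nothing from Taylor:)

```lean
-- In type theory, "infinity" enters not as an axiom but as a type
-- former: postulating an inductive type of naturals does the job
-- ZF's axiom of infinity does for sets.
inductive MyNat : Type where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Recursion/induction principles come for free from the declaration:
def double : MyNat → MyNat
  | MyNat.zero   => MyNat.zero
  | MyNat.succ n => MyNat.succ (MyNat.succ (double n))
```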

  4. #484
    I am not sure that Taylor's formulation of mathematical foundations is really so similar to set theory, but I'll have to read more about it to be sure. I would have assumed that the language he chooses naturally lends itself to constructive proofs, which can optionally be made sufficiently rigorous to be checked with a proof checker. But I'll admit I was somewhat assuming this would be the case.

  5. #485
    Also, I am rather unfazed about him being antiset. If you're going to try to formulate a language which lends itself to avoiding some of the pathologies of set theory, this is a start, no?

  6. #486
    Quote Originally Posted by Reid View Post
    Point being, it seems Paul Taylor has an antiset bias, which can only mystify anyone given that type theory and set theory are equivalent foundations pre-infinity. Beyond that, finitism doesn't seem super important as a mathematical theory in its own right.

    I don't know if there's an equivalent to the axiom of infinity in type theory. That'd be an interesting question.
    These are definitely good questions that I should be able to answer, but I haven't done my homework. As soon as I have, I will make it an opportunity to practice my teaching!

  7. #487
    If you look at Taylor's home page, I think the one line description of Abstract Stone Duality fairly well illuminates the thrust of his research program: it attempts to directly axiomatize real analysis in a way that is inherently computable.

    This is not a property enjoyed by the language of set theory: it is trivial to write things in set theory which are undecidable. It's simply too powerful, and too low level, to expect it to behave predictably without extra care! The point of Taylor's work is that if we do some of this work up front and bake it into our formalism, then we'll have built a more robust tool for ourselves in the task of encoding mathematics.
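    (A concrete illustration — my example, not Taylor's: set comprehension happily accepts predicates no algorithm can decide, e.g.)

```latex
S \;=\; \bigl\{\, n \in \mathbb{N} \;\bigm|\; \text{the $n$-th Turing machine halts on input } n \,\bigr\}
```

    S is a perfectly legitimate set in ZF, yet by the undecidability of the halting problem no procedure can settle "is n ∈ S?" in general. The language expresses questions it gives you no means to answer.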

    From the blurb for his older book, Practical Foundations of Mathematics, it is clear that he really has two objectives in mind: first, the direct encoding of idioms of mathematics as understood according to their practical use by mathematicians, and not simply how they managed to encode their work in set theory. And second, the machine representation of mathematics. These two goals (and I may add that they are highly ambitious, as well as being quite general, despite the necessary shades of finitism you might see in them) are clearly evident in his stated goals of Abstract Stone Duality.

    Practical Foundations collects the methods of construction of the objects of twentieth century mathematics, teasing out the logical structure that underpins the informal way in which mathematicians actually argue.
    Although it is mainly concerned with a framework essentially equivalent to intuitionistic ZF, the book looks forward to more subtle bases in categorical type theory and the machine representation of mathematics. Each idea is illustrated by wide-ranging examples, and followed critically along its natural path, transcending disciplinary boundaries between universal algebra, type theory, category theory, set theory, sheaf theory, topology and programming.
    The first three chapters will be essential reading for the design of courses in discrete mathematics and reasoning, especially for the "box method" of proof taught successfully to first year informatics students.
    Chapters IV, V and VII provide an introduction to categorical logic. Chapter VI is a new approach to term algebras, induction and recursion, which have hitherto only been treated either naively or with set theory. The last two chapters treat dependent types, fibrations and the categorical notion of quantification, proving in detail the equivalence of types and categories.
    The twin themes of structural recursion and adjoint functors pervade the whole book. They are linked by a new construction of the syntactic category of a theory.
    Students and teachers of computing, mathematics and philosophy will find this book both readable and of lasting value as a reference work.

  8. #488
    Perhaps it makes sense to think of this work as a flavor of constructivism? But one which concerns itself largely with issues of language and encoding, rather than setting out on some grandiose research project to recast all of mathematics according to some strict philosophically motivated subset of classical mathematics (like intuitionism attempted about a century ago, and failed).

    I don't know if you consider computer scientists to be "finitist", but I see Paul Taylor very much as attempting to cast as much classical mathematics as he possibly can in language which machines can understand well enough for the programmer / mathematician to say that s/he has adequately represented her/his work... and this is very much in the vein of computer science. Presumably this lends itself to making said mathematics inherently computable (can you now maybe see why some of the things I or Taylor have said come across as arrogant towards classical mathematicians? This stuff is wildly ambitious when you think about what its stated goals represent), although the mere act of representing the mathematics in a language that lends itself to computability already ought to reap benefits in terms of clarity and rigor (though I will be the first to admit that this is merely a goal, and I would have no way of supporting this claim). Fred Brooks, a venerable and great programmer, has said that all programming is representation. Paul Taylor (from what I can tell) is trying to confer some of the benefits of programming to a large chunk of classical mathematics.

    Now, there is also a connection to proof checkers (the book goes into dependent types, used in proof checkers). Presumably, one could go further, not only adopting Taylor's formalism but explicitly encoding one's mathematics in some specific computer language that can be checked by a tool like Coq. This is even more work than simply speaking the language promoted by Taylor in his book, of course, since this is an actual implementation of a specific type checking system, with all the quirks and complexities that come with an actual implementation. But I imagine that future incarnations of proof checkers will become more and more ergonomic.
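    (To make the dependent-types point concrete, here is the textbook example in Lean-style syntax — my illustration, not from Taylor's book: lists indexed by their own length, so the checker tracks an invariant that would be a runtime property in an ordinary language.)

```lean
-- Vec α n: lists of α whose length n is carried in the type.
inductive Vec (α : Type) : Nat → Type where
  | nil : Vec α 0
  | cons {n : Nat} : α → Vec α n → Vec α (n + 1)

-- head is total: the index n + 1 makes the empty case impossible,
-- so the checker accepts a match with no nil branch.
def head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | Vec.cons x _ => x
```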

  9. #489
    Admiral of Awesome
    Posts
    16,862
    problem: found bug in theorem
    solution: formalize mathematics

    problem: found bug in proof
    solution: write a computer program to automatically verify proofs

    problem: found bug in theorem prover

  10. #490
    Regression testing, maybe? After all, it's software.

  11. #491
    ^^vv<><>BASTART
    Posts
    7,768
    Quote Originally Posted by Reverend Jones View Post
    If you look at Taylor's home page, I think the one line description of Abstract Stone Duality fairly well illuminates the thrust of his research program: it attempts to directly axiomatize real analysis in a way that is inherently computable.
    I'm not sure on the specifics, but IIRC constructive methods of analysis are really limiting. For instance, Brouwer later went on to believe all functions R to R are continuous, and constructivist attempts to ground analysis can't disprove this idea. More on this later, if I find the time.

  12. #492
    I was a little wrong to suppose that Taylor's book Practical Foundations is a constructivist work, per se. Taylor cites the constructivist mathematics of Bishop as a kindred spirit. Thumbing through my copy, it is apparent that he is less interested in simply blasting through various chapters of analysis (or some such topic from classical mathematics), a la Bishop, than he is in making use of intuitionist logic as a sort of skeleton with which to tease out some structure from mathematical language that is missed by classical mathematics. In other words, he is less interested in prescribing which mathematical proofs are "acceptable", and more interested in direct constructions of patterns that mathematicians had been implicitly relying upon for intuition, but were failing to make explicit, because classical logic wasn't detailed enough. In this sense, the work is a kindred spirit of category theory (and indeed uses it throughout): it was category theory, not set theory, with which mathematicians finally found themselves at home with a language that would readily encode the structure of a mathematical theory at arbitrary scales of abstraction. Raw set theory is too heavy-weight for mathematicians to bother doing this, and Bourbaki's "structures", which were intended to provide a more ergonomic abstract framework fashioned from naive set theory (and which may be seen as a precursor to category theory), to my knowledge never caught on in a big way.

    He makes this much clear in the introduction. In particular, note the final paragraph.

    Foundations have acquired a bad name amongst mathematicians, because of the reductionist claim analogous to saying that the atomic chemistry of carbon, hydrogen, oxygen and nitrogen is enough to understand biology. Worse than this: whereas these elements are known with no question to be fundamental to life, the membership relation and the Sheffer stroke have no similar status in mathematics.

    Our subject should be concerned with the basic idioms of argument and construction in mathematics and programming, and seek to explain these (as fundamental physics does) in terms of more general and basic phenomena. This is "discrete math for grown-ups."

    A moderate form of the logicist thesis is established in the tradition from Weierstrass to Bourbaki, that mathematical discussions consist of the manipulation of assertions built using ∧, ∨, ¬, ⇒ and ∃. We shall show how the way in which mathematicians (and programmers) - rather than logicians - conduct such discussions really does correspond to a certain semi-formal system of proof in (intuitionistic) predicate calculus. The working mathematician who is aware of this correspondence will be more likely to make valid arguments, that others are able to follow. Automated deduction is still in its infancy, but such awareness may also be expected to help with computer-assisted construction, verification and dissemination of proofs.

    One of the more absurd claims of extreme logicism was the reduction of the natural numbers to the predicate calculus. Now we have a richer view of what constitutes logic, based on a powerful analogy between types and propositions. In classical logic, as in classical physics, particles enact a logical script, but neither they nor the stage on which they perform are permanently altered by the experience. In the modern view, matter and its activity are created together, and are interchangeable (the observer also affects the experiment by the strength of the meta-logic).

    This analogy, which also makes algebra, induction and recursion part of logic, is a structural part of this book, in that we always treat the simpler propositional or order-theoretic version of a phenomenon as well as the type or categorical form.

    Besides this and the classical symmetry between ∧ and ∨, and between ∀ and ∃, the modern rules of logic exhibit one between introduction (proof, element) and elimination (consequence, function). These rules are part of an even wider picture, being examples of adjunctions.

    This suggests a new understanding of foundations apart from the mere codification of mathematics in logical scripture. When the connectives and quantifiers have been characterised as (universal) properties of mathematical structures, we can ask what other structures admit these properties. Doing this for coproducts in particular reveals rather a lot of the elementary theory of algebra and topology. We also look for function spaces and universal quantifiers among topological spaces.

    A large part of this book is category theory, but that is because for many applications this seems to me to be the most efficient heuristic tool for investigating structure, and comparing it in different examples. Plainly we all write mathematics in a symbolic fashion, so there is a need for fluent translations that render symbols and diagrams as part of the same language. However, it is not enough to see recursion as an example of the adjoint functor theorem, or the propositions-as-types analogy as a reflection of bicategories. We must also contrast the examples and give a full categorical account of symbolic notions like structural recursion.

    You should not regard this book as yet another foundational prescription. I have deliberately not given any special status to any particular formal system, whether ancient or modern, because I regard them as the vehicles of meaning, not its cargo. I actually believe the (moderate) logicist thesis less than most mathematicians do. This book is not intended to be synthetic, but analytic - to ask what mathematics requires of logic, with a view to starting again from scratch.
    Regarding the relationship of the work to the constructive mathematics of Bishop: Taylor addresses this in section 1.8 (but don't look at the electronic copy on his website, since the fonts are ****ed):

    [Attachments: three screencapped pages of section 1.8.]

  13. #493
    Note that I fixed the fonts in two places in the first quoted passage, so that the logical connectives render properly.

  14. #494
    Having initiated this discussion with some over the top and blistering criticisms of some widely held set of assumptions (which, on the whole, nevertheless seem to have historically produced an impressive volume of productive work), I see now how I too easily opened myself up to attack for having promoted an "-ism", which initially covered up any potential discussion of the positive merits of a framework that takes the shortcomings highlighted by the original set of critiques seriously enough to provide insights that improve upon the strengths of the original system while also mending its excesses.

    Hold on. Are we talking about mathematics?

  15. #495
    Quote Originally Posted by Reid View Post
    I'm not sure on the specifics, but IIRC constructive methods of analysis are really limiting.
    Welcome to computer science, I guess? Though I'm not sure what is "limiting" about formalisms (like Abstract Stone Duality) that lend themselves to computation. Either you use them when doing so is expedient, or you don't. Really it's no different than any other algorithm (we don't stop using natural language on computers just because computers also allow us to write formal programs).

    If you look at my longer post just a few minutes ago (where I screencapped the book pages), you'll see that Taylor regards the law of the excluded middle, in those instances where it is such a convenience that we don't want to lose it, as costing nothing to simply presume: assume whatever theorem of real analysis depends on it, and call it an "idiom". (Edit: My reference to Taylor's use of the word "idiom" is poor paraphrasing of his use, but I think my use of it here makes just enough sense to be roughly compatible.)
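    (A standard illustration of excluded middle as pure convenience — my example, not Taylor's — is the classic proof that an irrational raised to an irrational power can be rational:)

```latex
\text{Either } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q}
\quad\text{or}\quad
\Bigl(\sqrt{2}^{\sqrt{2}}\Bigr)^{\sqrt{2}} \;=\; \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}} \;=\; \sqrt{2}^{\,2} \;=\; 2 \in \mathbb{Q}.
```

    Either way, some irrational a and b give rational a^b; excluded middle guarantees one of the two cases holds without telling us which, so the proof yields no explicit witness. (In fact √2^√2 is transcendental by Gelfond–Schneider, but that is far deeper.)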

    The point is: nothing is lost by using both intuitionist and non-intuitionist mathematics together, just like nothing is lost by using formal and informal (e.g., computer and natural) languages together. The bigger proposition here is to refute the contention by classical mathematicians that nothing is gained. And Paul Taylor tries to refute this in his introduction (which I quoted as well in that post), and again we arrive at his two goals for the project:

    Practical Foundations collects the methods of construction of the objects of twentieth century mathematics, teasing out the logical structure that underpins the informal way in which mathematicians actually argue.

    Although it is mainly concerned with a framework essentially equivalent to intuitionistic ZF, the book looks forward to more subtle bases in categorical type theory and the machine representation of mathematics.

  16. #496
    Taylor introduces Abstract Stone Duality as follows:

    Quote Originally Posted by Paul Taylor
    It is a revolutionary theory of topological spaces and continuous functions that treats them directly, just as traditional geometry was about lines and circles, without smashing the continuum into dust.
    In my view, the tl;dr of this subthread on axiomatic set theory versus these new category-theory-motivated foundations is: it's not that the axioms of set theory themselves can really be "wrong". What turned out to be wrong, after all was said and done, was the hope that axiomatic set theory itself wouldn't be ugly. I think the aesthetic merits of Paul Taylor's work make it a candidate for having more permanence (following Hardy's dictum that there is no permanent place for ugly mathematics) than foundations that pre-dated computers, category theory, and much of modern real analysis itself.

    Perhaps the line of thinking here is:
    1. category theory provides a language to capture the organizing aesthetic patterns of a mathematical theory
    2. whereas ugly representations lose sight of these aesthetic patterns
    3. "representation is the essence of programming" (Fred Brooks)
    4. those who seek to both program and represent mathematics at once using computers ought to first try and use the language of categories and types rather than committing set theoretic / logical reductionism
    5. over time the number of people doing so will grow, though perhaps never completely usurping the style of set theoretic classical mathematics initiated by Cantor.

  17. #497
    Quote Originally Posted by Reverend Jones View Post
    Having initiated this discussion with some over the top and blistering criticisms of some widely held set of assumptions (which, on the whole, nevertheless seem to have historically produced an impressive volume of productive work), I see now how I too easily opened myself up to attack for having promoted an "-ism", which initially covered up any potential discussion of the positive merits of a framework that takes the shortcomings highlighted by the original set of critiques seriously enough to provide insights that improve upon the strengths of the original system while also mending its excesses.

    Hold on. Are we talking about mathematics?
    Quote Originally Posted by Reverend Jones View Post
    set theoretic / logical reductionism
    Speaking of the charge of social-"ism" being bandied about by conservatives and their libertarian cousins / apologists for logical reductionism, remember Praxeology?

  18. #498
    That said, I'm pretty sure that Paul Taylor is probably just another utopian cult leader who is trying to start a revolution over some sobby idealism that will never work in practice.

  19. #499
    Going back to object-oriented programming, let me add that the reason I was led to believe that it had been "oversold" somehow was that while (as Jon`C mentioned) the paradigm is one conclusion of modular programming, at the same time it encourages stateful / imperative programming in such a seductive way that people begin to use it for everything, perhaps to a fault. I am not sure if vanilla object-oriented programming has ever been reconciled with a less imperative style, and from my understanding, this statefulness is pretty much intrinsic to it.

    Although maybe the problem wasn't that everybody needed to simply switch over to a different set of shackles (pure functional style), but that the original object-oriented programming paradigm simply needed to be supplemented with a concurrency model that wasn't simply threads. For example, coroutines in C++, or CSP in Go.

    I am aware that Scala is supposed to somehow 'reconcile' functional and object-oriented styles, but I am not a Scala programmer, and I haven't yet understood what this might mean.

  20. #500
    Admiral of Awesome
    Posts
    16,862
    Pure functional languages have state, it's just not global mutable state.

  21. #501
    Thinking back on this thread, I think I can conclude: until something comes along that mathematicians (classical or otherwise) can reliably "run into" as they "stress test" their theorems--that is to say, as they look far and wide for contradictions--even if that means constructing "ugly" things like axiomatic set theory, so that it's turtles all the way down (set theoretic / logical "reductionism"), classical foundations based on things like infinite sets and classical logic will be with us, since we don't have powerful enough tools to replace them, even if the leading candidates promise to be more reliable or precise.

    I suppose the real reason for the (perhaps seen as undue) emphasis on things like intuitionist logic / categories / types by folks such as myself, who prefer a more "combinatorial" setting in which to interpret and apply the content of the theorems we may be interested in (whether they originated in real analysis or something with far fewer intrinsically non-constructive pieces), is this: for results that intrinsically depend on classical foundations, and whose structure we find ourselves unable to capture in a language like category theory, well, we really don't know what to do with them, and can probably safely ignore them. Rota would have said that we have "different bottom lines", and in philosophical arguments in which one side is castigated as "finitist" and the other as "reductionist", we would simply be arguing past one another.

    On some level, the combinatorialist has the secret thesis that all mathematics, while possibly arising in some existing theory with its own structure and language, can always be given a combinatorial interpretation. For example, while matrix algebra arose historically in this or that context, bringing along some canonical interpretation, combinatorialists still have uses for it that might benefit from a more flexible, neutral (though perhaps more complicated) interpretation.

  22. #502
    Quote Originally Posted by Jon`C View Post
    Pure functional languages have state, it's just not global mutable state.
    So then might we say that while pure functional languages offer a sledge-hammer solution to the problem of shooting yourself in the foot with mutable state, the same, in principle, could be achieved with extra care about being responsible with mutable state in an imperative object-oriented programming language?

    But doesn't (and correct me if I am wrong) this somewhat belie what appears to me a general consensus that object-oriented programming language compilers have more difficulty with static analysis than those of functional languages such as Standard ML or Haskell? If it's harder for compilers to make simplifying assumptions in languages that give lots of freedom in what can be done with state, how can we trust humans to manage it well manually?

  23. #503
    Admiral of Awesome
    Posts
    16,862
    As far as I'm concerned the distinction doesn't meaningfully exist anymore. E.g.:

    Code:
    #include <utility>  // for std::move / std::forward below
    
    // Pure functional interface: no external side effects
    // AND object oriented:
    // - FnT is an abstract function type
    // - SeqCtrT is an abstract sequence container type
    template <class FnT, class SeqCtrT>
    SeqCtrT transform(FnT &&fn, SeqCtrT ctr)
    {
        // Imperative implementation reusing locally owned mutable state for efficiency
        for(auto &em : ctr) {
            em = fn(em);
        }
    
        return std::move(ctr);
    }
    
    // Everything else could be functional:
    
    template <class SeqCtrT>
    SeqCtrT add_one_to_each(SeqCtrT ctr)
    {
        // Functional but for explicitly forwarding local mutable state (optional but faster).
        return transform([](auto &&em) { return em + 1; }, std::forward<SeqCtrT>(ctr));
    }
    
    ...
    
    // Just functional
    return add_one_to_each(std::vector<int>{ 1, 2, 3, 4 });
    The main difference is that languages like C and C++ give you the first-class option of handling memory as (global or local) mutable state, instead of just devices.

  24. #504
    Interesting. Thanks.

    I suppose I've absorbed some outmoded or superficial ideas that conflate some historically prominent style of programming associated with different languages with assumptions of what the languages actually offer. If I can program in C++ just like I might want to in ML, then I should probably see for myself before opening my mouth.

  25. #505
    Well, maybe not just like ML, if we are using objects to organize the program and don't have something like pattern matching. But, comparable in terms of safety, efficiency, and modularity. And apparently more flexible since as you showed we can choose which pieces we might want to keep pure.

  26. #506
    Admiral of Awesome
    Posts
    16,862
    Quote Originally Posted by Reverend Jones View Post
    So then might we say that while pure functional languages offer a sledge-hammer solution to the problem of shooting yourself in the foot with mutable state, the same, in principle, could be achieved with extra care about being responsible with mutable state in an imperative object-oriented programming language.
    No, it's that there is no silver bullet.

    The real world is global mutable state. Any interesting program has to interact with the real world, using what we call I/O. Haskell has I/O, which means it does have global state - it just doesn't let you treat memory as global state. Haskell I/O synchronization is no tidier than what you'd see in an imperative programming language; it comes pre-wrapped in a thread-safe interface, which is nice, but lower-level languages like C and C++ have eschewed this deliberately, rather than by necessity. Does Haskell prevent I/O data races? Sure, maybe - assuming there's no back door that people can abuse. Can using it prevent deadlocks, priority inversion, etc.? Of course not.

    The main difference is that C and C++ also let you (optionally) treat memory as global mutable state to which you may (optionally) synchronize access. That means you can (optionally) blow your foot off handling memory, in addition to obligatorily blowing your foot off while handling I/O and other system resources.

    But doesn't (and correct me if I am wrong) this somewhat belie what appears to me a general consensus that object-oriented programming language compilers have more difficulty in static analysis than those of functional languages such as Standard ML or Haskell?
    Global state can be abstracted away. Unconstrained abusive overuse of indirect function calls can't. I don't know who shares this consensus, but it's not anybody in the SAST industry AFAIK.

    how can we trust humans to manage it well manually?
    We can't.

  27. #507
    (Written before Jon's last comment.)

    Now what's a little interesting to me is that object-oriented programmers seem to be in a sort of turf war with functional programmers. For example, I did a quick search for 'pattern matching' and 'C++', and got information about a library solution implementing functional-style pattern matching in C++. But then in the Hacker News discussion for this page, the top comment begins to talk about how the visitor pattern might be in competition with pattern matching (with the Google style guide favoring the visitor pattern over pattern matching), and then another post links to yet another library solution, which simply implemented multimethods for C++. And I had heard Peter Norvig make the claim that "design patterns are bug reports against your programming language", arguing that the visitor pattern is redundant in languages with multimethods (he cited Common Lisp). And then there are people in the HN discussion suggesting that Scala represents the better attempt to reconcile object-oriented and functional styles. This wouldn't be incredibly surprising given the relative ages of either language (although perhaps at this point one might hesitate to call C++ a single "language"), but at the very least I am left with the feeling that C++ has got to have a very complicated compiler.

  28. #508
    Admiral of Awesome
    Posts
    16,862
    Quote Originally Posted by Reverend Jones View Post
    Interesting. Thanks.

    I suppose I've absorbed some outmoded or superficial ideas that conflate some historically prominent style of programming associated with different languages with assumptions about what the languages actually offer. If I can program in C++ just like I might want to in ML, then I should probably see for myself before opening my mouth.
    Quote Originally Posted by Reverend Jones View Post
    Well, maybe not exactly like ML, if we are using objects to organize the program and don't have something like pattern matching. But comparable in terms of safety, efficiency, and modularity. And apparently more flexible, since as you showed we can choose which pieces we might want to keep pure.
    Yeah, C++ doesn't have pattern matching yet. Between function overloads and template partial specialization you can almost do it, but only for types - not values. And it's extremely verbose.

    I know Bjarne Stroustrup wants pattern matching, so it's probably just a matter of time.
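    A minimal sketch of the types-only "matching" described above: the primary template is the fallback arm, and partial/full specializations are the patterns. (The Describe name is made up for illustration.)

    ```cpp
    #include <cassert>
    #include <string>
    #include <vector>

    // Primary template: the catch-all "arm".
    template <typename T>
    struct Describe { static std::string name() { return "something else"; } };

    // Partial specialization: matches any std::vector<T>, a structural
    // pattern over the *type*, never over a runtime value.
    template <typename T>
    struct Describe<std::vector<T>> { static std::string name() { return "a vector"; } };

    // Full specialization: matches one concrete type.
    template <>
    struct Describe<int> { static std::string name() { return "an int"; } };

    int main() {
        assert(Describe<int>::name() == "an int");
        assert(Describe<std::vector<double>>::name() == "a vector");
        assert(Describe<char>::name() == "something else");
        return 0;
    }
    ```

    Note how much ceremony this takes compared with a single `match` expression, which is the verbosity complaint above.
    
    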

  29. #509
    Of course all of that (my last post) is rather silly. At the end of the day these are not important issues, so thanks Jon for pointing out the important engineering concerns at stake here and directing me away from the superficial.

    Yes, C++ might be complicated, but who cares? Maybe compiler writers?

  30. #510
    Admiral of Awesome
    Posts
    16,862
    Quote Originally Posted by Reverend Jones View Post
    I am left with the feeling that C++ has got to have a very complicated compiler.
    Good guess.

  31. #511
    Admiral of Awesome
    Posts
    16,862
    Quote Originally Posted by Reverend Jones View Post
    Of course all of that (my last post) is rather silly. At the end of the day these are not important issues, so thanks Jon for pointing out the important engineering concerns at stake here and directing me away from the superficial.

    Yes, C++ might be complicated, but who cares? Maybe compiler writers?
    C++ has a lot of footguns, most of which are mandated by the standard.

    E.g. this blog post that popped up on HN last week: http://ithare.com/c-thoughts-on-deal...gned-mismatch/

    I've certainly pined for a C++ variant that removed at least some of the complexity - things like implicit casts for integral types, and integer promotion. But only because that portion of the complexity is expressly unsafe, and originates in C.

    I actually use most of the things that make C++ a genuinely complicated language to learn and implement. And I want even more complexity! I just want C++ to be complicated in a good way, not complicated the way it currently is, a minefield of undefined behaviour.
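    The two C-inherited footguns named above (implicit signed/unsigned conversion and integer promotion) take only a few lines to demonstrate; this sketch exhibits standard-mandated behavior, no library assumptions:

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main() {
        // Usual arithmetic conversions: the signed operand is converted
        // to unsigned, so -1 wraps to UINT_MAX and the comparison goes
        // the "wrong" way.
        int neg = -1;
        unsigned int pos = 1;
        assert(neg > pos);   // true, surprisingly

        // Integer promotion: uint8_t operands are promoted to int before
        // arithmetic, so the subtraction happens in signed int.
        std::uint8_t a = 0, b = 1;
        auto diff = a - b;   // diff has type int and value -1, not 255
        assert(diff == -1);
        return 0;
    }
    ```

    Both behaviors are conforming, which is why a style guide can only warn about them, while a true language variant could remove them.
    
    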

  32. #512
    Wasn't Java supposed to do that? "C++ with bug fixes". But I guess I'm out of date, now I should be suggesting something like D or Rust or ...

  33. #513
    Also, companies seem to do something similar to what you want implicitly, though crudely, by simply writing style guides that prohibit a bunch of idioms, leaving you with a subset of the language that is supposed to work fairly well. Of course, you're not going to easily fix something as fundamental as integer casts with mere guidelines. And actually it seems we are looking at two written documents (loosely speaking) in conflict with one another: style guidelines versus the standard. Hence the need for a "variant", I see now.

  34. #514
    But OK now, I should stop before talking too much about something that will probably drive you into ranting about things that already drive you crazy because you necessarily encounter them at work!

  35. #515
    Of course, one can go the reverse direction: start with C and simply beat it to death with a bunch of tools ensuring safety. For example, I was just reading about how SQLite does this. But of course C restricts you completely to a simple idiom that C++ programmers probably liken to stone-age programming, perhaps with the exception of using pointers in a fancy enough way to mimic C++ abstractions (though I imagine this would get verbose very fast). One could of course take a verbose language like C, Java, or Go that lacks macros and, say, bolt macros on top. But of course this doesn't at all help the compiler use those features to compete with C++ performance, so there's not much point beyond human comprehension.

    Edit: now that I think about it, notwithstanding the relative unsuitability of the JVM for things like games, I would suspect Scala to be up your alley.
    Last edited by Reverend Jones; 03-14-2018 at 06:47 PM.

  36. #516
    Actually, what about Julia?

    Edit: reading a bit more about this language, it is rather different from C++, and seems to be motivated more by languages with dynamic dispatch, like Lisp or Dylan. People seem to want to use it to call existing C code, though, but perhaps mostly in batch environments. Maybe Rust or D (though D has been around for a long while without catching on) would be more appropriate contenders for a C++ successor.
    Last edited by Reverend Jones; 03-14-2018 at 05:08 PM.

  37. #517
    ^^vv<><>BASTART
    Posts
    7,768
    Quote Originally Posted by Reverend Jones View Post
    Thinking back on this thread, I think I can conclude that until something comes along that mathematicians (classical or otherwise) can reliably rely upon to "run into" as they "stress test" their theorems: that is to say, look far and wide for contradictions--even if that means constructing "ugly" things like axiomatic set theory, so that it's turtles all the way down (set theoretic / logical "reductionism")--then classical foundations based on things like infinite sets and classical logic will be with us, since we don't have powerful enough tools to replace them, even if the leading candidates promise to be more reliable or precise.
    Well, that is why mathematical talks, conferences and peer review exist. Expecting completely reliable proof checking amounts to expecting a human product to be infallible; I don't think that's a genuine possibility, so I don't place it high on my list of worries.

    The suspicion I have with replacing "classical foundations" is that the new foundations will almost surely run into unforeseen issues. Set theory wasn't created as a "good enough" substitute, people thought it would be the one perfect theory that could unite all of mathematics. Once people all began using the language, its limitations became apparent, and I have no reason to believe any other foundation won't run into limitations once people really begin working at it. Two foundations might have different weaknesses and strengths, so having different ways to formulate makes math better by allowing different strategies and languages of proof, which is good, but I'd like to be rid of the notion that we'll ever have a perfect formulation of the foundations of mathematics.

    Quote Originally Posted by Reverend Jones View Post
    I suppose the real reason for the (perhaps seen as undue) emphasis on things like intuitionist logic / categories / types by folks such as myself, who prefer a more "combinatorial" setting to interpret and apply the content of the theorems we may be interested in (whether they originated in real analysis or something with far fewer intrinsically non-constructive pieces), is that for results that intrinsically depend on classical foundations, and for which we find ourselves unable to capture some structure of it in a language like category theory, then, well, we really don't know what to do with it, and can probably safely ignore it. Rota would have said that we have "different bottom lines", and to have philosophical arguments in which one side is castigated as "finitist", and the other as "reductionist", well, we would simply be arguing past one another.
    Part of why I feel resistant to finitist arguments is that the skepticism they show towards infinite sets is selective. Sure, it's not obvious, if you presuppose a notion of finite sets, why infinite sets exist, but you've already made assumptions about the existence of finite sets. As long as we're being skeptics, why not be skeptical of the entire notion of set theory altogether? There is no shortage of arguments against logical reasoning itself, some of which go back very far and are still very pertinent. Assuming the existence of infinite sets does give rise to complications, but I feel any theory powerful enough to say interesting things must, almost as a necessity of nature, give rise to weird things.

    Quote Originally Posted by Reverend Jones View Post
    On some level, the combinatorialist has the secret thesis that all mathematics, while possibly arising in some existing theory with its own structure and language, can always be given a combinatorial interpretation. For example, while matrix algebra arose historically in this or that context, bringing along some canonical interpretation, combinatorialists still have uses for it that might benefit from a more flexible, neutral (though perhaps more complicated) interpretation.
    That much of mathematics can be done combinatorially is well known in mathematics, for instance the theory of simplicial sets gives rise to a theory of algebraic topology which is combinatorial, and algebraic topology is definitely not hostile to category theory.

  38. #518
    Quote Originally Posted by Reid View Post
    Well, that is why mathematical talks, conferences and peer review exist. Expecting completely reliable proof checking amounts to expecting a human product to be infallible; I don't think that's a genuine possibility, so I don't place it high on my list of worries.

    The suspicion I have with replacing "classical foundations" is that the new foundations will almost surely run into unforeseen issues. Set theory wasn't created as a "good enough" substitute, people thought it would be the one perfect theory that could unite all of mathematics. Once people all began using the language, its limitations became apparent, and I have no reason to believe any other foundation won't run into limitations once people really begin working at it. Two foundations might have different weaknesses and strengths, so having different ways to formulate makes math better by allowing different strategies and languages of proof, which is good, but I'd like to be rid of the notion that we'll ever have a perfect formulation of the foundations of mathematics.
    There's a field that studies this: it's called computer science.

    If alternatives to mathematical foundations as they exist today are founded in the explicit study of the very limits of foundation-building, and only purport to make incremental progress, I am not sure how one can expect to do much better. There's a reason why these "alternative" foundations are full of stuff like types, which are heavily used in CS.

    Of course it may never work, or it may only work partially. But there's no reason to be skeptical of a field that is inherently cognizant of its own limitations.

    Finitism (your term) is not really relevant here, except maybe that you don't like that some people may be motivated to study such foundations by it.
    Last edited by Reverend Jones; 03-14-2018 at 07:05 PM.

  39. #519
    ^^vv<><>BASTART
    Posts
    7,768
    I guess the real point is: yes, there are reasons to be skeptical of some things in mathematics. There are reasons to be skeptical of anything; philosophical inquiry into any subject will show that virtually nothing we believe in has any sort of absolute, transcendental foundation that nobody can dispute. All beliefs about anything have good counter-arguments. At the same time, it's completely untenable to be an actual skeptic; in fact, I find the notion that anyone is fully skeptical absurd. You simply could not function as a human being without operating on quite a few base beliefs and truths about the world (that, say, opening your refrigerator is not about to kill you, or that the sun will come up tomorrow; we all act practically on belief). We're all basically walking around having blind faith about most things we believe most of the time.

    For some reason, some people get really particular and really antsy about one belief in mathematics. I just don't see the point in developing such a deep obsession with one particular belief. Past a certain point, such deep skepticism comes across as more pathological and irrational than accepting that we can't justify any belief fully.
    Last edited by Reid; 03-14-2018 at 07:07 PM.

  40. #520
    ^^vv<><>BASTART
    Posts
    7,768
    Quote Originally Posted by Reverend Jones View Post
    There's a field that studies this: it's called computer science.

    If alternatives to mathematical foundations as they exist today are founded in the explicit study of the very limits of foundation-building, and only purport to make incremental progress, I am not sure how one can expect to do much better. There's a reason why these "alternative" foundations are full of stuff like types, which are heavily used in CS.
    Computer science will not be able to overcome all of the possible problems in mathematics. It has strengths and can lead us in new directions, but it absolutely cannot solve every issue. Such an idea is a pipe dream.
