There are so many definitions of OOP out there, varying between different books, documentation and articles.
What really defines OOP?
You’re getting a lot of conceptual definitions, but mechanically, it’s just:

Data and the behavior that operates on that data, kept together.
At minimum, that’s it. All the other things (encapsulation, message passing, inheritance, etc.) are there to solidify that concept further or to extend the paradigm with features.
For example, you can express OOP semantics without OOP syntax:
    foo_dict.add(key, val)        # OOP syntax
    dict_add(foo_dict, key, val)  # OOP semantics
Importantly, that’s “together at runtime”, not just a matter of code organization. One of the defining features of an object is dynamic dispatch: your object is a pointer both to the data itself and to the implementation that works on that data.
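As a quick REPL-style sketch of that (the Shape names here are just illustrative, not from the original), each value below carries a pointer to its own implementation, so the same call site picks the right behavior at runtime:

```scala
trait Shape {
  def area: Double
}

// Each object pairs its data (the constructor fields) with the
// implementation that knows how to work on that data.
final case class Circle(radius: Double) extends Shape {
  def area: Double = math.Pi * radius * radius
}

final case class Rect(width: Double, height: Double) extends Shape {
  def area: Double = width * height
}

val shapes: List[Shape] = List(Circle(1.0), Rect(2.0, 3.0))

// One call site, two implementations: each element dispatches to its
// own area method at runtime.
val areas: List[Double] = shapes.map(_.area)
```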
There’s a similar but distinct idea in Haskell, Scala, and Rust - what Haskell calls type classes. Rust gives it a veneer of OO syntax, but the semantics themselves are interestingly different.
In particular, the key idea of type classes is keeping data and behavior separate. The language itself is responsible for automagically passing in the behavior.
So in Scala, you could do something like
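(a minimal sketch; the exact shape of Num here, with just zero and plus, is an assumption)

```scala
trait Num[A] {
  def zero: A
  def plus(x: A, y: A): A
}

object Num {
  // Summoner: Num[A] fetches whatever instance is in implicit scope.
  def apply[A](implicit n: Num[A]): Num[A] = n

  implicit val intNum: Num[Int] = new Num[Int] {
    def zero: Int = 0
    def plus(x: Int, y: Int): Int = x + y
  }
}

// The context bound [A: Num] asks the compiler to find and pass in a Num[A].
def sum[A: Num](xs: List[A]): A =
  xs.foldLeft(Num[A].zero)((acc, x) => Num[A].plus(acc, x))
```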
Or
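(the same thing with the implicit parameter spelled out instead of a context bound)

```scala
def sum[A](xs: List[A])(implicit num: Num[A]): A =
  xs.foldLeft(num.zero)((acc, x) => num.plus(acc, x))
```

Either way, `sum(List(1, 2, 3))` has the compiler supply `Num[Int]` for you, and `sum(List.empty[Int])` still knows how to produce a zero.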
Given a Num typeclass that encapsulates numeric operations. There are a few important differences:
- All of the items of that list have to be the same type of number - they’re all Ints or all Doubles or something.
- It’s a list of primitive numbers and the implementation is kept separate - no need for boxing and unboxing.
- Even if that list is empty, you still have access to the implementation, so you can return a type-appropriate zero value.
- Generic types can conditionally implement a typeclass. For example, you can make an Eq instance for List[A] if A has an Eq instance. So you can compare List[Int] for equality, but not List[Int => Int] (sketched below).
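A sketch of that last point; the exact Eq definitions here are assumed, not from the original:

```scala
trait Eq[A] {
  def eqv(x: A, y: A): Boolean
}

object Eq {
  implicit val intEq: Eq[Int] = new Eq[Int] {
    def eqv(x: Int, y: Int): Boolean = x == y
  }

  // Conditional instance: List[A] gets an Eq only if A already has one.
  implicit def listEq[A](implicit eqA: Eq[A]): Eq[List[A]] = new Eq[List[A]] {
    def eqv(xs: List[A], ys: List[A]): Boolean =
      xs.length == ys.length &&
        xs.zip(ys).forall { case (x, y) => eqA.eqv(x, y) }
  }
}

def same[A](x: A, y: A)(implicit eq: Eq[A]): Boolean = eq.eqv(x, y)

// e.g. in the REPL:
same(List(1, 2, 3), List(1, 2, 3))  // compiles: Eq[Int] exists, so Eq[List[Int]] is derived
// same(List((x: Int) => x), List((x: Int) => x))  // won't compile: no Eq[Int => Int]
```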