But how long will this trend last? In the history of IT, certain trends (fashions) have come and gone, only to return years later. So it was with AI; we are probably on its 3rd (or 4th) wave now. It was the same with the so-called "expert systems": they were supposed to solve all problems, and many years ago it was predicted that in the future they would handle practically all technical problems. This did not happen (although many of them are useful in narrow applications). Or maybe in a few years there will be a turn towards simplifying programming languages, with pressure to move advanced algorithms into libraries? Maybe V, Zig or Carbon are the harbingers of these changes?
I've looked at language design a lot over the past few years, and I try to learn at least one new language per year. And from my very subjective observations, yes, there are different waves. The first real language paradigm boom was in the 70s and 80s, where basically most of the paradigms we see today were invented. E.g. generics are considered a new feature in most languages, but they were first introduced by ML in 1973. There were a lot of great ideas, but also a lot that went nowhere, or looked good on paper but wasn't useful for real software. Smalltalk or Forth, for example, are really amazing languages that give you a real "awakening" moment, in that you will never look at programming the same way after learning them, but they aren't really that useful for building real software.
Then there was OOP (imperative OOP, that is; while Smalltalk introduced OOP, it was fully expression-based and more akin to a functional language, and that never caught on). And in the 90s it was just so incredibly useful, especially when Java came around with all its tooling, that basically everyone stopped experimenting and got on the OOP train. Then, after around 20 years of OOP being the dominant paradigm, people started noticing the "cracks" and problems with it: everything being nullable, too many levels of indirection, too much implicit state, etc.
And this is why today everyone looks around again and experiments with "new" or "forgotten" paradigms for how to do things better, and with which of those features would work well alongside the existing OOP paradigms.
This will not last forever; probably in 10 years or so the experimentation will have settled again and we will have some new set of dominant paradigms. But I must say, I really like looking at where some languages are going. Personally I really like Swift. I don't even own any Apple products anymore, but just going by the language features it's a very nice mix of classical imperative features and some more functional ones. I personally do not like dynamically typed languages like JavaScript and Python; while they are quite powerful, their reliance on runtime polymorphism results in a lot of bugs that a language with more compile-time checking would have avoided.
It cannot track uninitialized variables across nested functions, so splitting them causes all kinds of warnings
I personally like nested functions because their visibility is encapsulated. But I think that relying on shared state (i.e. the variables of the parent scope) should be minimized as much as possible. I also try to write my functions as "pure" as possible (i.e. they rely solely on their parameters for input and produce output only via the return value or out parameters).
That may not result in the most efficient code, but it often results in more readable and understandable code.
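A minimal sketch of what I mean (FPC; the function names and the task are made up for illustration), where the nested helper takes everything through parameters instead of touching the parent's variables:

{$mode objfpc}
function SumOfSquares(const Values: array of Integer): Integer;

  // nested helper: visible only inside SumOfSquares, and deliberately
  // "pure" in that it reads nothing from the enclosing scope
  function Square(X: Integer): Integer;
  begin
    Result := X * X;
  end;

var
  I: Integer;
begin
  Result := 0;
  for I := Low(Values) to High(Values) do
    Result := Result + Square(Values[I]);
end;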
And each function gets its own implicit finally block, so it becomes very slow when there are too many. Although one could split it into one function having no managed variables and one function having all the managed variables
I personally don't care about performance until I run into problems. Some years ago I worked on a C++ project which required every bit of performance possible; with runtimes of days on 15-kiloeuro machines, every bit of performance increase was valuable. We even did things like using the upper 16 bits of a 64-bit pointer to store additional data, because this reduced copy operations.
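Just to illustrate the trick (sketched here in Pascal rather than the original C++, with made-up names): on common x86-64 setups only the lower 48 bits of a user-space pointer are significant, so the upper 16 bits can carry a small tag, as long as it is masked off again before dereferencing. This is of course highly platform-specific and breaks on systems that actually use those bits.

const
  TagShift = 48;
  AddrMask = (PtrUInt(1) shl TagShift) - 1;  // lower 48 bits: the real address

// pack a 16-bit tag into the otherwise unused upper bits of the pointer
function TagPointer(P: Pointer; Tag: Word): Pointer;
begin
  Result := Pointer((PtrUInt(P) and AddrMask) or (PtrUInt(Tag) shl TagShift));
end;

// recover the dereferenceable pointer and the tag again
function UntagPointer(P: Pointer; out Tag: Word): Pointer;
begin
  Tag := Word(PtrUInt(P) shr TagShift);
  Result := Pointer(PtrUInt(P) and AddrMask);
end;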
After this I developed what I like to call "Performance PTSD", where I would look at a piece of code and see all the unnecessary operations: copies where references should be used, inefficient branches instead of arithmetic operations, etc.
It got so bad that I couldn't write good software anymore, in the sense that all my code was really complicated in order to be as efficient as possible (I used raw pointer access to circumvent managed code, moves instead of assignments, generics instead of virtual methods, etc.). I also just did not finish anything, because I wasted so much time trying to write the most optimal code.
It took me quite some time, and another person literally measuring my code, to realize that none of it mattered in the slightest: they showed me that I was optimizing nanoseconds in code that sat right between some "Write" calls which each took multiple milliseconds, and that I had just written bad code for no reason whatsoever.
Since then I don't care about performance anymore. If I run into performance problems, I use a profiler to find the bottleneck and fix that specifically. And whenever I find myself writing very complicated code because it is more efficient, I stop myself and just write the simplest form; I will make it more performant if I need to.
For example, I could use a geometrically growing GetMem buffer where I lazily call Initialize on each element on demand to collect data into an array. But if it's just a few values, doing this:
for Value in GenerateValues do
  Result += [Value];
is much easier to understand and write.
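For contrast, the GetMem/ReallocMem variant I mean would look roughly like this (just a sketch; TValue, the growth factor and the helper name are all made up). The buffer grows geometrically so appends stay amortized O(1), and each managed slot is initialized lazily just before first use:

{$mode objfpc}{$H+}
{$pointermath on}
type
  TValue = string;   // placeholder for some managed element type
  PValue = ^TValue;

procedure AppendValue(var Buf: PValue; var Count, Capacity: SizeInt;
  const V: TValue);
begin
  if Count = Capacity then
  begin
    // grow geometrically instead of by one element
    if Capacity = 0 then
      Capacity := 4
    else
      Capacity := Capacity * 2;
    ReallocMem(Buf, Capacity * SizeOf(TValue));
  end;
  Initialize(Buf[Count]);  // lazily initialize the managed slot before use
  Buf[Count] := V;
  Inc(Count);
end;

Compared to the two-line loop above, that's a lot of machinery for a handful of values.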
That said, it's a bit different when I write libraries, because there I don't want to artificially slow things down more than necessary.
As a side note: while I don't consider performance to necessarily be a good guiding principle for writing code, I think that programming languages could still be designed to provide the tools for it and to be easily optimizable.
One of my favourite examples here is Haskell. It is a functional language, and when run interpreted (e.g. in GHCi) or written naively, Haskell code is usually orders of magnitude slower than (algorithmically) similar code in C or Pascal. But by compiling with optimizations (with or without the LLVM backend), and by using more optimized types and strictly evaluated values instead of lazy ones, you can easily be as fast as the other languages. Sure, it requires writing somewhat more "performance-aware" code, but it is still distinctly Haskell and feels native to the ideas of that language. You can write fast code when you need it, and accept slower but more flexible code when you don't.