I didn't do much C# after VS2005, but back then the (WinForms) apps were a bit laggy. At the time I was preparing to move to ASP.NET for web apps, but my career went a different, more embedded way.
I also haven't used .NET much since 2012 or so, but I came into contact with some .NET applications developed by people I know, and WPF applications at least generally run pretty smoothly. That said, from what I have heard (and this was already the sentiment back when I used it), you should make sure resources are released deterministically via Dispose, and, in situations where you know you can afford a small pause, you can even trigger a collection manually.
Also, from what I know, garbage collection has become much faster over the past 15 years or so; Microsoft invested heavily in GC research for exactly that reason. The 10-20 ms it is today is actually quite impressive and wasn't always the case, especially in the early days of .NET in the 2000s.
I don't know Swift that well, but are C# and Java really that humongously higher level than, say, C++ or Delphi? Sure, they cut some low-level stuff and have a GC, but conceptually, are they really?
Well, these languages are built entirely upon the concepts of OOP (for better or for worse), and to a certain degree they abstract memory away. In C++ you always need to know your memory: whether something lives on the stack or the heap, and so on. Delphi abstracts a little with classes, but you still need to know to a certain degree what happens underneath, and you have to juggle pointers in some situations.
In C# you only have your objects, which are either passed by reference (classes) or copied by value (structs), and in Java you just have references and nothing else. Stack, heap, pointer: those words are essentially meaningless in these languages. (Funnily enough, because of the way generational garbage collectors allocate, the heap behaves much like a stack: new objects are bump-allocated at the end of the young generation.)
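Python, which comes up below, sits on the Java side of this split: every name is just a reference to an object, and assignment never copies. A minimal sketch of what that means in practice:

```python
import copy

# Every Python name is a reference; assignment aliases the same object.
a = [1, 2, 3]
b = a            # b now refers to the very same list object as a
b.append(4)
print(a)         # [1, 2, 3, 4] -- the mutation is visible through both names

# To get value semantics you have to copy explicitly:
c = copy.copy(a)  # shallow copy
c.append(5)
print(a)          # [1, 2, 3, 4] -- unchanged
print(c)          # [1, 2, 3, 4, 5]
```

There is no struct-like "copy by value" category of user-defined types; you opt into copying per call site instead.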
Also, while Java is still very simplistic (which is one thing I actually quite like about it), there has been a lot of development in C# recently, to the point that when I see modern C# code, I sometimes don't recognize features that didn't exist when I last used the language ten years or so ago. So there are a lot of new things. And it is really weird, when you consider yourself at least rudimentarily familiar with a language, to look at a piece of code and have no idea what it actually does.
I never understood what people see in Python, at least not for sizeable apps. At first I thought it would fade away, like Perl, but it seems to constantly reinvent itself. What makes it worth dealing with poor performance and scalability, major-version problems, a vast but low-quality package ecosystem with its security problems, and awkward deployment? (OK, those Python compilers are less obscure than, say, 8 years ago, but still. I think they are more about deployment than performance, by the way.)
I have a love-hate relationship with Python. I really like its incorporation of functional paradigms, as I love functional programming. Haskell is my favourite language that I will never use (pure functional languages are IMHO too cumbersome to be useful, since most of programming is doing I/O or calling APIs, in which case your functional language basically has to emulate imperative structures, which is a real pain in the neck), so seeing a lot of these features in Python is really nice.
For example, one thing I really love about Python is the ability to use higher-order functions like in Haskell. It takes a small hack, but it works; see the following snippet:
from typing import Any, Callable

# Fixes the first argument of a function to a specified value
def fix_func(func: Callable, val: Any) -> Callable:
    return lambda *arg_list, **arg_dict: func(val, *arg_list, **arg_dict)

# example function
def plus(a: int, b: int) -> int:
    return a + b

# plus5: fixes the first argument of plus to 5, resulting in a function that adds 5 to any number
plus5: Callable[[int], int] = fix_func(plus, 5)

# nine: fixes the argument of plus5 to 4, i.e. plus with the first argument fixed to 5 and the second to 4, resulting in a function that always returns 9
nine: Callable[[], int] = fix_func(plus5, 4)

# Test
print(plus5(10))  # 15
print(nine())     # 9
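In fact, the hack isn't even needed: the standard library ships exactly this as functools.partial, which does the same partial application:

```python
from functools import partial

def plus(a: int, b: int) -> int:
    return a + b

# partial fixes leading positional (and/or keyword) arguments
plus5 = partial(plus, 5)
nine = partial(plus5, 4)

print(plus5(10))  # 15
print(nine())     # 9
```

partial objects also carry .func and .args, so the fixed values stay inspectable, which the lambda version hides.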
It might seem like a gimmick at first, but once you have used it a few times, you will never want to be without it. Of course, in a real functional language like Haskell it is much easier:
-- operators in haskell can be used as functions by putting them in brackets
plus5 = (+) 5
-- haskell does not differentiate between values and 0-ary functions
nine = plus5 4
But that's a different story.
Another thing I really like about Python is named parameters:
def plus(a: int, b: int) -> int:
    return a + b

plus(a=5, b=4)

# with default values
def plus(a: int = 4, b: int = 5) -> int:
    return a + b

plus(b=11)
Firstly, simply writing down the parameter names at the call site massively increases readability for functions with a lot of parameters. It also allows much more flexibility with default parameters, which in languages like Pascal would require multiple overloads. And it can help to find errors: if you always use named parameters, the linter will immediately complain when you are missing a required one.
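Python can even enforce this style: parameters after a bare * in the signature are keyword-only, so callers must name them. A quick sketch (the function and parameter names here are made up for illustration):

```python
# Parameters after the bare * can only be passed by name.
def connect(host: str, *, port: int = 5432, timeout: float = 30.0) -> str:
    return f"{host}:{port} (timeout={timeout}s)"

print(connect("db.local", port=5433))  # db.local:5433 (timeout=30.0s)

# connect("db.local", 5433) raises a TypeError at the call site:
# connect() takes 1 positional argument but 2 were given
```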
That said, I had the (mis)fortune to work on and with larger projects in Python; e.g. I wrote an addon for Caldera. There are a few perks to writing servers in Python: similarly to PHP, testing some new functionality basically just amounts to changing a few lines and re-executing the script, rather than having to recompile and/or redeploy.
But my main problem is that Python makes it really easy to write bad code.
As you might have seen in my examples above, I always write type annotations in Python. If you do that, the language becomes much, much better to use: readability improves, and the tooling (like VS Code) can give you correct suggestions and hints.
That said, most people I have worked with (in fact, everyone I haven't "trained" to use type annotations) do not use them, which makes interfacing with such code, especially in larger projects, a real pain.
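To illustrate what annotations buy you, here is a toy example (the function is made up; mypy is one checker that catches this class of error):

```python
from typing import List

def scale(values: List[float], factor: float) -> List[float]:
    """Multiply every element by factor."""
    return [v * factor for v in values]

# A type checker (e.g. mypy) or an IDE flags a bad call before it ever runs:
#   scale("not a list", 2.0)   # error: Argument 1 has incompatible type "str"
print(scale([1.0, 2.0, 3.0], 2.0))  # [2.0, 4.0, 6.0]
```

Without the annotations, `scale("not a list", 2.0)` would silently return a garbage list of repeated strings at runtime instead of being rejected up front.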
Especially if you have libraries that do not define object structure statically (i.e. in the constructor) but generate objects dynamically. For example, libclang builds polymorphism not through class inheritance but by stuffing different attributes into the same class: when traversing a C++ AST you always have a Node object at hand, but that node may have a member field called "result_type" if it represents a function, and not if it represents a class definition or something similar.
This makes it really confusing to find out which fields you actually have and can access. (When I used libclang, I basically used the debugger's inspector to find out what I could access, because there was no documentation, and the code was so spread out that finding it from the source would have taken ages.)
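The pattern looks roughly like this (a hypothetical Node class sketched for illustration, not the actual libclang API), and consuming code ends up defensively probing attributes at runtime:

```python
from typing import Any, Optional

class Node:
    """Hypothetical AST node in the style described above:
    different kinds of node stuff different attributes into one class."""
    def __init__(self, kind: str, **attrs: Any) -> None:
        self.kind = kind
        for name, value in attrs.items():
            setattr(self, name, value)

func = Node("function_decl", spelling="plus", result_type="int")
cls = Node("class_decl", spelling="Widget")

# No annotation tells you which attributes exist, so you probe with getattr:
def describe(node: Node) -> str:
    result: Optional[str] = getattr(node, "result_type", None)
    if result is not None:
        return f"{node.spelling} -> {result}"
    return node.spelling

print(describe(func))  # plus -> int
print(describe(cls))   # Widget
```

A static checker can't help here at all: every attribute access on such a Node is an act of faith, which is exactly the debugging-by-inspector experience described above.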
So while I do like Python in principle, working with it, especially when interfacing with foreign code (i.e. pretty much all the time), can be a real pain.