Some well-considered advice: drop TensorFlow and go with PyTorch. Spend your effort where it will make a difference: on deep learning, rather than on fighting with the framework.
People just keep using TF because it was the first full-fledged Python framework for this, not because it has any technical merit anymore. In PyTorch you will make twice as much progress in half the time.
TF does have features that are still lacking in PyTorch: complex number support, for example, and better sparse matrix support. TF's new distributed API is also more capable than PyTorch's, allowing different levels of control rather than only a high-level one. There are surely more examples outside my use cases. So yes, PyTorch is my default, but if I need a feature it lacks, I switch back to TF.
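To make the sparse-support point concrete, here is a minimal sketch of PyTorch's COO sparse tensors, the area the comment above says is more mature in TF. The matrix values here are made up for illustration; the point is that construction and sparse-dense matmul work, while many other ops do not.

```python
import torch

# A 3x3 sparse matrix in COO format: indices are a 2 x nnz matrix
# (row indices on top, column indices below), values are a 1-D tensor.
i = torch.tensor([[0, 1, 2],
                  [2, 0, 1]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, size=(3, 3))

dense = s.to_dense()          # materialize for inspection
print(dense)

# Sparse-dense matmul is supported; much of the rest of the op
# surface is not, which is the gap the comment refers to.
x = torch.ones(3, 1)
print(torch.sparse.mm(s, x))
```

Each row of the product is just the single nonzero in that row, since the dense operand is all ones.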
I'm just starting with TF and found Keras quite useful to begin with. TF itself feels like assembly language, and I'm sure it will evolve into something higher-level like Keras in the near future.
I have never used PyTorch though, will check it out. I would appreciate your thoughts on tf.keras vs pytorch.
Not flexible enough for research (at some point you still have to deal with the horrible TensorFlow API underneath), but good if you just want to implement or use something that already exists. It's not good for models whose graph changes dynamically. Actually, "flexible" is probably not the right word: you can make it do what you want, but you will spend a lot longer and the result will likely be unreadable.
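The "graph changes dynamically" point is easiest to see in code. Below is a toy, hypothetical sketch of PyTorch's define-by-run style: the graph is rebuilt on every forward pass, so ordinary Python control flow can depend on the data itself, and autograd differentiates whatever path was actually taken. None of these names come from the thread; it is just an illustration.

```python
import torch

def forward(x, weight):
    h = x
    # The number of matmuls depends on a runtime value -- awkward to
    # express in a static graph, trivial in define-by-run.
    steps = int(x.abs().sum().item()) % 3 + 1
    for _ in range(steps):
        h = torch.tanh(h @ weight)
    return h

x = torch.randn(4, 8)
w = torch.randn(8, 8, requires_grad=True)
out = forward(x, w)
out.sum().backward()      # autograd traces whichever path was taken
print(w.grad.shape)       # torch.Size([8, 8])
```

In graph-mode TF the loop bound would have to be expressed with `tf.while_loop` and friends, which is exactly the kind of fight the comment above describes.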
But a better question is: why bother with Keras at all if PyTorch gives you a higher-performance, more flexible, more "Pythonic" solution? And yes, did I mention performance? PyTorch blows the socks off anything TF-based on most training and inference tasks.
Thanks.
I admit what made me choose TF is the backing from Google, which somewhat guarantees the tool will stick around for a while, and the number of contributors to the library. Where do you see PyTorch in the near future?
I see it overtaking TF as the framework of choice for researchers and practitioners alike. I also see TF moving closer and closer API-wise to PyTorch's superior API. This is already starting in 2.0 with imperative mode, but due to the amount of legacy code already written for earlier versions, they have a massive brake on their efforts, something PyTorch (which got it more or less "right" from the beginning) does not. Finally, Google is working on Cloud TPU support for PyTorch as we speak.
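For readers who haven't seen it, the imperative style that TF 2.0's eager mode moves toward looks like this in PyTorch: a complete training step with no session and no graph compilation, every op running as the line executes. This is a toy linear-regression sketch with made-up data, not anything from the thread.

```python
import torch

# Toy model and synthetic data: learn y = sum of the input features.
model = torch.nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 3)
y = x.sum(dim=1, keepdim=True)

for _ in range(50):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()     # ops run immediately; debug with plain print/pdb
    opt.step()

print(loss.item())      # loss shrinks toward 0
```

The debugging story is the practical payoff: you can drop a breakpoint anywhere in the loop and inspect live tensors, which is what the older TF session/graph split made painful.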
Thanks. I always forget about Caffe2 when talking about PyTorch. I couldn't find anything for JavaScript, and mobile doesn't seem as well supported as in TF, but they will surely improve.
Have you actually used TFLite? It's slow as molasses. Deploying on mobile is a bit of a shitshow across the board right now, from what I understand. Not all models are supported out of the box (especially with ONNX), and the ones that are supported aren't guaranteed to have acceptable performance with off-the-shelf frameworks. Documentation is very sparse as well, especially for the quantized stuff.