Does LightFM have a GPU-based implementation?

No, there is no option to run training or inference on the GPU with LightFM. There are currently no plans to change this. See https://github.com/lyst/lightfm/issues/429

What are the “learning to rank” and “hybrid” aspects of LightFM and how do they relate?

Learning to rank and hybrid recommendation are independent concepts. “Learning to rank” means optimizing a ranking loss such as WARP or BPR; “hybrid” means incorporating user or item metadata as additional features alongside the interaction data. See: https://github.com/lyst/lightfm/issues/442
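The two ideas can be illustrated independently with a minimal numpy sketch. This is not LightFM's internal implementation; the feature indices, embedding sizes, and random values are purely illustrative. It shows (a) the hybrid idea, where a user or item representation is the sum of its feature embeddings, and (b) the learning-to-rank idea, where a pairwise loss such as BPR only cares that a positive item scores higher than a negative one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hybrid aspect: a user/item representation is the sum of the
# embeddings of its features (identity feature plus metadata features).
n_features, dim = 6, 4
feature_emb = rng.normal(size=(n_features, dim))

# Illustrative user described by features {0 (identity), 2 (metadata)}.
user_vec = feature_emb[[0, 2]].sum(axis=0)
# A positive item with feature {1} and a negative item with features {3, 4}.
pos_vec = feature_emb[[1]].sum(axis=0)
neg_vec = feature_emb[[3, 4]].sum(axis=0)

# Learning-to-rank aspect: a pairwise (BPR-style) loss depends only on
# the score *difference* between the positive and the negative item.
pos_score = user_vec @ pos_vec
neg_score = user_vec @ neg_vec
bpr_loss = -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))
print(bpr_loss)
```

Note that the two aspects compose without depending on each other: the same pairwise loss works whether the representations come from metadata features (hybrid) or from pure identity embeddings (classic matrix factorization).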

Adding user/item features makes my model perform worse than without features. What can I do?

That’s not unusual and can have several causes. First, make sure you aren’t dropping the per-user/per-item identity features; see the notes in the LightFM documentation. If that doesn’t help, your features might simply be uninformative and worsen the signal-to-noise ratio. Experiment with different feature sets and try discretization strategies for continuous features. More strategies and ideas can be found here:
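One discretization strategy for a continuous feature can be sketched with numpy alone. The feature name, bucket edges, and values below are hypothetical; the point is that a raw continuous value is replaced by one binary feature per bucket before being handed to the model.

```python
import numpy as np

# Hypothetical continuous feature: user ages. Feeding raw floats as
# feature weights can hurt, so discretize into buckets and use each
# bucket as a separate binary feature instead.
ages = np.array([17.0, 23.5, 31.0, 44.0, 68.0])
bins = np.array([18, 25, 35, 50])      # illustrative bucket edges
bucket = np.digitize(ages, bins)       # bucket index per user

# One-hot encode the buckets: one binary indicator feature per bucket.
n_buckets = len(bins) + 1
one_hot = np.eye(n_buckets)[bucket]
print(one_hot)
```

Each row of `one_hot` would then be supplied as that user's feature vector (together with the identity feature), so the model learns one embedding per age bucket rather than trying to interpret a raw scalar.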

How can I re-train my model on partial data and/or new users (user cold-start)?

This depends a lot on your specific use case. Here are some helpful discussions:
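For the user cold-start part specifically, the key property of a feature-based model is that a representation is built from feature embeddings, so a user never seen during training can still be scored from metadata alone. A minimal numpy sketch of that idea (the feature names, dimensions, and values are hypothetical, not LightFM's API):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these embeddings were learned during training; in a real
# system they would come from the fitted model.
dim = 4
feature_emb = {
    "age_18_25": rng.normal(size=dim),
    "country_de": rng.normal(size=dim),
    "item_42": rng.normal(size=dim),
}

# Cold-start user: no interactions in the training data, but we know
# their metadata, so their representation is the sum of those
# metadata-feature embeddings.
new_user_vec = feature_emb["age_18_25"] + feature_emb["country_de"]
score = float(new_user_vec @ feature_emb["item_42"])
print(score)
```

Re-training on partial data is a separate question from cold-start scoring: the former updates the learned embeddings with new interactions, while the latter only reuses already-learned feature embeddings for unseen users.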