I'm looking for a way to use AI chat without sending all my data to some server in the cloud, and HammerAI seems like a strong contender since it runs models locally in the browser. My main concern is hardware performance: I'm on a standard 16GB RAM laptop, and I worry that running LLMs locally will cause huge lag or overheat the system. Has anyone here tried the roleplay or productivity templates? I'm also curious whether it supports importing custom models from places like Hugging Face, or if you're restricted to the ones built into the software itself. Any advice on getting the fastest response times would be a huge help!