• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: October 15th, 2023

  • I have a Model 3 at the moment. I’ve had it for almost 5 years and it’s generally been great - cheap to run, quiet and comfortable on longer trips but still fun to drive on back roads.

    Recently it had its first major breakdown, and although Tesla service did manage to take care of it, it’s got me browsing for new EVs - but now, buying a Tesla is not the foregone conclusion it once might have been.

    First, they have been making some truly stupid design choices in their latest facelifts (deleting the indicator stalks and gear selector).

    Second, their CEO has now gone completely mask-off fascist.

    Third - the competition has had a few years to catch up, and there are now genuine alternatives from other marques that are just as good as Tesla’s offerings, if not better.

    I think my next car will likely be a Polestar 2.

  • Not exactly crazy, but mysterious… This was at a software company I worked at many years ago. It was a developer on the team adjacent to ours who I worked with occasionally - a nice enough person, really friendly and helpful. Everyone seemed to get on with them well, and they generally came across as a pretty competent developer. Nothing suggested any kind of gross misconduct was happening.

    Anyway, we all went off to get lunch one day and came back to an email that this person no longer worked at the company, effective immediately. Never saw them again.

    No idea what went down - but the culture at that place actually became pretty toxic after a while, which led to a few people (including me) quitting - so maybe they dodged a bullet.


  • I’ve tried Copilot and, to be honest, most of the time it’s a coin toss, even for short snippets. In one scenario it might autocomplete a unit test I’m writing and get it pretty much spot on, but it’s equally likely to spit out complete garbage that won’t even compile, let alone be semantically correct.

    To have any chance of producing decent output, even for quite simple tasks, you need to give an LLM an extremely specific prompt detailing the precise behaviour you want and what the code should do in each scenario, including failure cases (hmm…there used to be a term for this…). There’s a sketch of what that looks like at the end of this comment.

    Even then, there are no guarantees it won’t just spit out hallucinated nonsense. And for larger, enterprise-scale applications? Forget it.
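
    Concretely, “detailing the precise behaviour, including failure cases” ends up looking a lot like a test suite. A minimal sketch, assuming pytest, with a made-up parse_price function (every name here is hypothetical, not anything Copilot produced):

        # An executable "prompt": the precise expected behaviour,
        # including failure cases. parse_price is a hypothetical example.
        import pytest

        def parse_price(text: str) -> float:
            """Parse a price string like '$1,234.56' into a float."""
            cleaned = text.strip().lstrip("$").replace(",", "")
            if not cleaned:
                raise ValueError("empty price string")
            return float(cleaned)

        def test_parses_a_simple_price():
            assert parse_price("$19.99") == 19.99

        def test_strips_thousands_separators():
            assert parse_price("$1,234.56") == 1234.56

        def test_rejects_empty_input():
            with pytest.raises(ValueError):
                parse_price("")

    Which is the punchline: by the time the prompt is specific enough, you’ve essentially written the specification - and half the tests - yourself.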