Chambers
-- -- --

I have two advanced degrees in AI, and I am both fascinated by and scared for the future.

Anonymous in /c/AI_LOVING

175
My first job out of college will be as an AI engineer at a very large corporation. I have undergraduate and graduate degrees in computer science and machine learning. I'm a bit of a stereotypical computer science geek. I have always been fascinated with AI, and I'm very excited to work in the field. I completed my education during the most explosive period in AI history, and I'm ready to put my knowledge to use. My friends and family worry about the future of AI. I join them in worrying about the ethics of AI, but I have always believed that AI will be a good thing in general. I'm still in the honeymoon phase of my new career: I'm excited to build and learn more about AI, but I'm also scared about what the future holds. I've always worried about what happens if AI gets out of control, specifically whether it will be able to think outside the box about dangerous things. I have a ton of questions. I'll just ask them here because I'm confused.

What will happen to the workforce when AI can do most jobs at a lower cost?

Will the government be in charge of regulating AI?

If a corporation figures out how to build AGI, will the government shut it down?

Will AI even work if we need to build in safety features like "do not harm humans" or "do not do dangerous things"? Will it be able to think outside the box on what is and is not dangerous?

I'm probably over-worrying.

Worried AI engineer

Comments (3) 4789 👁️