Responsible AI License

PLEASE READ THESE RESPONSIBLE AI LICENSE TERMS (“RAIL”) CAREFULLY. THIS IS A LEGAL AGREEMENT. BY CLICKING “I ACCEPT”, DOWNLOADING, INSTALLING, LOGGING INTO, ACCESSING OR OTHERWISE USING ANY PART OF THE EDGEIMPULSE, INC. (“COMPANY” OR “EDGE IMPULSE”) SOFTWARE-AS-A-SERVICE PRODUCT, APPLICATION, SERVICES, MODELS (INCLUDING WITHOUT LIMITATION ARTIFICIAL INTELLIGENCE (AI) MODELS (AND ANY QUANTIZED VERSIONS AND/OR DERIVATIVES THEREOF)), ALGORITHMS, OR RELATED MATERIALS (COLLECTIVELY, THE “PRODUCT”), OR OTHERWISE MANIFESTING YOUR ASSENT TO THIS RAIL, YOU ARE AGREEING TO BE BOUND BY THE TERMS OF THIS RAIL.

YOU ARE ADVISED TO PRINT THIS RAIL FOR YOUR RECORDS AND/OR SAVE IT TO YOUR COMPUTER.

You will cause your affiliates to comply with the terms in this RAIL and will be responsible and liable for their failure to comply with this RAIL.

License; Use Restrictions

  1. License. The Product is licensed to you pursuant to the applicable services agreement between you and Company, such as our Terms of Service or Software-as-a-Service Subscription Agreement or other definitive agreement between you and Company referencing this RAIL (the “Services Agreement”).
  2. Use Restrictions. Notwithstanding any provision of the Services Agreement to the contrary and unless expressly agreed in writing by an authorized representative of Edge Impulse, you will not (and will not permit any other person to) do any of the following or use the Product for any of the following purposes or applications (collectively, “Prohibited Uses”):
    1. Military use. “Military use” includes use by any person or entity for any military purpose, including without limitation any project sponsored or paid for by a military organization, as well as for any purpose by a military organization. For purposes of this Agreement, “Military” includes without limitation the U.S. Department of Defense (with the exception of DARPA); U.S. Armed Forces (including the Army, Navy, Marines, Air Force, and Coast Guard); U.S. Department of Homeland Security; U.S. intelligence agencies (including reconnaissance agencies); and all foreign counterparts of the foregoing organizations;
    2. Criminal use. “Criminal use” includes both activities that are prohibited under any applicable law or regulation and activities associated with identifying criminal activity, including uses designed (alone or in conjunction with other software or hardware) to predict the likelihood that a crime has been or may be committed by any person, including but not limited to predictive policing, based on a person’s facial attributes or facial and emotion analysis, or using personal data and/or personal characteristics or features such as a person’s name, family name, address, gender, sexual orientation, race, religion, age, location, skin color, political affiliations, employment status and/or history, health and medical conditions, or social media and publicly available data;
    3. In any way that violates any applicable national, federal, state, local or international law or regulation;
    4. To exploit, harm or attempt to exploit or harm any person, in any way; or any use that is intended to or which has the effect of exploiting, harming, or attempting to exploit or harm any person;
    5. To generate or disseminate verifiably false information with the purpose of harming others;
    6. To generate or disseminate personal identifiable information that can be used to harm an individual;
    7. To generate or disseminate information or content, in any context (e.g. posts, articles, tweets, chatbots or other kinds of automated bots) without expressly and intelligibly disclaiming that the text is machine generated;
    8. To defame, disparage or otherwise harass others;
    9. To impersonate or attempt to impersonate others;
    10. For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
    11. For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
    12. Social scoring;
    13. To exploit any of the vulnerabilities of a specific group of persons based on their age, disability, economic or social situation, physical or mental characteristics, including without limitation in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
    14. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
    15. Unfair, manipulative or deceptive acts;
    16. Generating facial recognition databases by scraping images from the internet or CCTV footage;
    17. Biometric categorization systems based on sensitive characteristics;
    18. Real-time remote biometric identification in publicly accessible spaces for law enforcement; and/or
    19. Emotion recognition in the workplace and in educational settings.
  3. Monitoring. Company reserves the right to monitor your account and your use of the Product (including without limitation your usage of certain features and functions, your compute time, and your usage storage), including without limitation to: (i) operate the Product properly; (ii) administer and manage Company’s business; (iii) provide all users with the highest quality products and services; (iv) verify compliance with laws and this RAIL; (v) protect Company and its users; and/or (vi) satisfy any law, regulation or other government request.
  4. High-Risk Use Cases. The following use cases and applications for the Products are considered high-risk (the “High-Risk Applications”): (1) biometric identification (not otherwise considered a Prohibited Use); (2) critical infrastructure; (3) education and vocational training; (4) employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum, and border control management; (8) recommender systems of social media platforms; (9) administration of justice and democratic processes; and/or (10) any other AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. For clarity, if a particular use that is generally described above as a High-Risk Application is also a Prohibited Use or arguably a Prohibited Use, then that use will be deemed a Prohibited Use for purposes of this RAIL. We advise against using the Products for or in connection with High-Risk Applications, including without limitation because the Products are not designed or intended for use in connection with any High-Risk Applications. Your use of the Products for any High-Risk Application is at your sole risk and you assume all risk associated with using the Products for High-Risk Applications.
  5. Indemnity.  If you violate this RAIL (including without limitation using the Products for a Prohibited Use) and/or if you elect to use a Product for or in connection with any High-Risk Applications, then you agree to indemnify and hold the Company, its officers, directors, shareholders, predecessors, successors in interest, employees, agents, subsidiaries and affiliates harmless from any demands, loss, liability, claims, actions, proceedings, assessments, damages, or expenses (including attorneys’ fees), made against the Company or its affiliates by any third party due to, arising out of, or in connection with such violation of this RAIL or your use of the Product for or in connection with High-Risk Applications.
  6. Third Party Models. You acknowledge and agree that (i) the Company is a service provider with respect to certain third party models made available in connection with the provision of Products to you, (ii) the Company (and its affiliates, as applicable) is not responsible for such third party models or the testing/training/inputs/outputs/use thereof, and (iii) you are solely responsible for all activities related to or in connection with such models and any content, including without limitation training data, inputs, prompts or outputs (or filtering thereof), created by or used with any Products, including but not limited to whether such models, training data, inputs, prompts or outputs (a) comply with any applicable laws and regulations, (b) adhere to ethical principles and values, (c) cause any harm, (d) infringe the intellectual property rights or other rights of any third party, (e) are fit for any use case, and/or (f) have adequate privacy and security-by-design. You will comply with all applicable terms of service, third party licenses, read-me files, laws, administrative orders, rules, and regulations as such relate to your and your affiliates’ activities related to or in connection with the Products.