
AI FAQs

Artificial Intelligence (AI) Background

With the issuance of Governor Youngkin's Executive Order 30 on Artificial Intelligence (AI) and the subsequent release of the Utilization of Artificial Intelligence by COV Policy Standard and the Artificial Intelligence Standard, agencies affected by the requirements in these documents have raised numerous questions about compliance.

These FAQs address the Artificial Intelligence Registration and Approval process, provide guidance on which AI technologies need to be registered, and offer cautions and recommendations to consider with regard to the use of AI.

Visit VITA's Artificial Intelligence page for more information.

AI frequently asked questions (FAQs)

Help! I have questions about Artificial Intelligence (AI) and don't know where to get started. Who can help us with questions that aren't addressed here?

All agencies, including executive branch, higher education and independent agencies, should refer to the list of CAMs and Other VITA Contacts for assistance.

Additional questions can be sent to: vccc@vita.virginia.gov

For technical questions regarding specific AI technologies, please reach out to your assigned EA.

My agency wants to use an AI technology. What do we need to do?

Per the Utilization of Artificial Intelligence by COV Policy Standard, all executive branch agencies (as that term is defined in EO 30) shall register their use of internal and external AI systems for oversight and approval to ensure the trusted, safe and secure use of such systems.

Artificial Intelligence Registration and Approval is a multistep process that requires registration in Archer and approvals in Planview from VITA, your agency head and your secretary (external AI only). If you have questions about how to register your AI technology in Archer, contact your assigned EA. If you have questions about Planview, contact your assigned ITIMD analyst.

While there is no exemption in EO 30 for already existing uses of AI, not all uses need to be captured in the registry. Use the criteria below to determine whether registration is required (a simple illustrative sketch of this decision logic follows the list). When in doubt, ask your assigned EA.

  • If the AI solution meets all three of the following criteria, registration is not required:
    • Internally facing
    • Does not use people data
    • Not incorporated in a production system (examples of non-production systems are sandbox, development and test systems only using synthetic or publicly available data)
  • If the AI solution meets any of the following conditions, registration is required:
    • Externally facing
    • Uses people data
    • Utilized in a production system
    • Supports agency mission essential or business critical processes
    • Includes sensitive data as defined in SEC-530 Information Security Standard
    • Requires COV Ramp review for the evaluation of cloud services
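
As an illustration only, the registration criteria above can be expressed as a simple decision check. This is a hypothetical sketch; the function and parameter names below are assumptions for the example and are not part of Archer, Planview, or any VITA tooling. The authoritative criteria remain those in the standard.

```python
# Hypothetical sketch of the registration decision logic described above.
# Names are illustrative only and do not correspond to any VITA system.

def registration_required(
    externally_facing: bool,
    uses_people_data: bool,
    in_production_system: bool,
    supports_mission_essential_process: bool,
    includes_sec530_sensitive_data: bool,
    requires_cov_ramp_review: bool,
) -> bool:
    """Return True if the proposed AI use must be registered."""
    # Registration is required if ANY of these conditions applies.
    return any([
        externally_facing,
        uses_people_data,
        in_production_system,
        supports_mission_essential_process,
        includes_sec530_sensitive_data,
        requires_cov_ramp_review,
    ])

# Example: an internally facing sandbox pilot using only synthetic data.
assert registration_required(
    externally_facing=False,
    uses_people_data=False,
    in_production_system=False,
    supports_mission_essential_process=False,
    includes_sec530_sensitive_data=False,
    requires_cov_ramp_review=False,
) is False
```

If any condition applies, proceed with the Archer registration and Planview approval steps described above.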

Do I need to register and seek approval for using freely available AI products (e.g., ChatGPT, Gemini)?

Any use of AI, including use of free and widely available products, must receive approval using the process outlined in the standards. Absent official approval, no AI product should be implemented by a state agency official, including by downloading free products to state IT devices or otherwise using such products for official business.

As both EO 30 and the AI Standards provide, the approval process is intended to ensure that Commonwealth agencies use AI in a way that safeguards citizens’ data, protects against biased applications, and makes sense from a business perspective. Freely available products, no less than AI products that an agency purchases, carry the risk of improper use. For instance, recent news coverage has suggested that certain free AI products sometimes produce biased results that may conflict with the standards set forth in EO 30.

What situations are exempt from the AI policies and standards (including the AI registry)?

The AI policies and standards (including the AI registry) do not apply to: 

  • AI used in defense or COV security systems (examples: cybersecurity, HVAC, or SCADA systems)
  • AI embedded within common commercial products (examples: an Apple Watch, an iPhone, commercial desktop software like Adobe Photoshop, or a managed software-as-a-service application where the Commonwealth does not own the software application and the data being used)
  • AI research and development (R&D) activities or instructional programs at public institutions of higher education 

What are some examples of AI in research and development (R&D) activities or instructional programs at public institutions of higher education that are exempt?

AI research and development (R&D) activities or instructional programs at public institutions of higher education are exempt from the AI policy and standards and do not need to be managed in the AI registry.  For example:

  • AI used as part of a research initiative where AI may be assisting human subjects to perform a task
  • AI assisted research of a body of knowledge
  • AI assisted analysis of research data
  • Research on AI methods, algorithms, and technologies
  • AI used in the classroom where students use generative AI on their assignments
  • AI used in the classroom where instructors assess and evaluate student assignments

VITA recommends that uses of AI for research purposes be governed by the institution's Institutional Review Board (IRB) program for research oversight to ensure that human subjects are treated ethically and responsibly and that protections are applied to safeguard subject welfare and privacy (including subjects' data).

VITA recommends that the use of AI in instructional programs be governed by the Virginia Department of Education for K-12 institutions and by the State Council of Higher Education for Virginia for institutions of higher learning.

What are some examples of AI use in public institutions of higher education that are NOT exempt?

Though AI research and development (R&D) activities or instructional programs at public institutions of higher education are exempt from the AI policy and standards, institutional administrative activities that involve Commonwealth data (including student and financial data) are not exempt.  For example:

  • AI used in the student admissions process
  • AI used to award scholarships, grants, and other forms of financial aid
  • AI used to recruit and select personnel and manage staff performance
  • AI used to manage institution business functions including financial, procurement, real estate, facility, and information technology processes
  • AI used to access sensitive institution data including data regulated by HIPAA, FERPA, and the IRS

Why is the AI Registration and Approval process separate from COV Ramp? Why can't we just submit to COV Ramp first?

COV Ramp and the AI Registration and Approval process serve related but separate needs.

  • AI Registration and Approval addresses intent to leverage a specific AI technology for a specific use, and demonstrates compliance with the concerns raised by the utilization policy
  • COV Ramp establishes the operational viability of the vendor and the Commonwealth's ability to do business with them

In addition, submission to COV Ramp results in a charge to the agency. By submitting to AI Registration and Approval first, we can avoid an unnecessary charge in the event that a proposed use is rejected by VITA or the secretariat. 

Will the AI registration submission become public information?

The purpose of the AI registry is to ensure full transparency with respect to the use of AI. This is per the Utilization of Artificial Intelligence by COV Policy Standard (Section IV, Mandatory Disclaimers), which identifies several criteria to be observed.

The AI registry is subject to the Freedom of Information Act (FOIA). There are, however, existing cybersecurity exemptions, which will be applied if/as appropriate. See Va. Code § 2.2-3705.2(2) & (14). The general provisions of FOIA contemplate situations in which some information will need to be redacted and/or extracted from a database. See, e.g., Va. Code §§ 2.2-3704(G) & 2.2-3704.01.

If agencies are entering information into the registry that they consider sensitive/confidential, agencies should clearly identify such information, so as to facilitate later review of what information should be disclosed publicly.

The AI technology I want to use does not appear on the AI Technology Roadmap. Does that mean I can't use it?

The purpose of the COV Artificial Intelligence Technology Roadmap is to identify AI solutions that have been reviewed for use and to highlight areas an agency may wish to investigate. If a particular AI solution is not listed on the roadmap, the agency can submit the solution through the AI Registration and Approval process.

All proposed uses of AI technology need to be considered for registration, per the guidance above, regardless of whether they appear on the AI Technology Roadmap.

We want people within the agency to be able to use generative AI solutions like Copilot and ChatGPT for research and to kickstart ideas. Does this need to be registered?

Uses of generative AI tools that

  • are internally facing
  • do not produce decisions or policies
  • do not use Commonwealth Data or people data
  • are not used in pursuit of core agency business processes
    • are not incorporated in a production system

do not require AI registration.
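
Continuing the hypothetical sketch from the registration question above, these five conditions can be combined into a single check. The names below are illustrative assumptions only; if any answer is uncertain, register the use or consult your assigned EA.

```python
# Hypothetical sketch only; continues the illustrative example above.

def generative_ai_use_exempt(
    internally_facing: bool,
    produces_decisions_or_policies: bool,
    uses_commonwealth_or_people_data: bool,
    supports_core_business_process: bool,
    in_production_system: bool,
) -> bool:
    """Return True only if ALL of the conditions above are satisfied."""
    return (
        internally_facing
        and not produces_decisions_or_policies
        and not uses_commonwealth_or_people_data
        and not supports_core_business_process
        and not in_production_system
    )

# Example: internal brainstorming that uses no Commonwealth or people data.
assert generative_ai_use_exempt(
    internally_facing=True,
    produces_decisions_or_policies=False,
    uses_commonwealth_or_people_data=False,
    supports_core_business_process=False,
    in_production_system=False,
)
```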

What if my AI technology incorporates people data?

An agency utilizing people data in any AI solution must be able to describe how that data is used and protected.

  • Detail the specific data elements that will be included, a justification for their use and a statement of value for what will be produced
  • Explain how the dataset will be secured, and identify individuals or roles that will have access to the data
  • Explain the operation of the AI algorithm on the dataset and how the output is produced
  • Identify how any output will be appropriately anonymized

Commonwealth Data shall not be used in the development and/or training of AI models.
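
As a hypothetical illustration of how an agency might organize this documentation, the required descriptions could be captured in a simple record like the one below. The field names are assumptions for the sketch only and do not reflect the actual Archer registration form.

```python
# Hypothetical sketch only; field names do not reflect the Archer registration form.
from dataclasses import dataclass

@dataclass
class PeopleDataUseRecord:
    """Documentation an agency should be able to produce for people data used in an AI solution."""
    data_elements: list[str]        # specific data elements included
    justification: str              # why those elements are needed
    value_statement: str            # the value of what will be produced
    security_controls: str          # how the dataset will be secured
    authorized_roles: list[str]     # individuals or roles with access to the data
    algorithm_description: str      # how the AI operates on the dataset and produces output
    anonymization_approach: str     # how any output will be appropriately anonymized
```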

People are bringing AI recorders to public meetings and recording the proceedings. Should we allow them to do that?

When citizens bring AI recorders to meetings, either public or private, they are allowed to record so long as they are not disruptive. Virginia is a single-party consent jurisdiction (Va. Code § 19.2-62), which means that as long as one participant in a conversation consents to the communication being recorded, it is not illegal to record the conversation. Also, FOIA or other laws may provide a right to record meetings that are open to members of the public. Consult your legal counsel if you have questions about these issues.

How can I tell if a solution our agency is using contains AI?

You should expect that products across the IT landscape will increasingly incorporate AI capabilities in some form, likely including software solutions your agency currently has in place.

  • Agencies should be attentive to changes to products within their portfolio for introduction of AI components or capabilities
  • Agencies should be cautious about vendor claims that exaggerate or misinform with regard to a product or service’s use of AI to generate customer appeal, a practice known as AI washing

There is increasing presence of AI in common commercial products. Do we need to register everything?

Regarding AI embedded within common commercial products, our interest in registering the technology focuses on whether the AI is (a) making alterations to input data, or (b) making outbound decisions.

The mere existence of AI within a product is insufficient to require registration; we are only interested in registration when the AI component is used to produce an end product. An example is Archer, which has an AI module, but COV does not use it and it cannot be invoked separately by an end user.

We want to develop a custom AI solution. What data are we allowed to use as part of training the AI and what data do we need to be concerned about?

Any COV-developed AI solution must be able to demonstrate permission to use the datasets in question, including those from publicly available sources, and must be able to enumerate those sources in order to explain the operation of the solution. All datasets employed with AI solutions must be documented in the COV Enterprise Architecture tool. Consult your legal counsel or the Office of the Attorney General regarding intellectual property concerns or other permission for use requirements.

Is it okay to use AI to generate software code for use in our production systems?

AI used to generate code must have human oversight of the implementation of what is produced. A human must inspect the generated code prior to its insertion into a code branch, and the solution must be tested prior to production release.