Automation isn’t intelligence, but the real value of automation, especially software automation, lies in taking the robot out of the human. The global pandemic has shown how valuable automation has become to businesses and governments alike, keeping operations running smoothly, avoiding bottlenecks, and relieving people of repetitive tasks to free them up for higher-value work.
However, to get real benefit from automation, and Robotic Process Automation (RPA) in particular, it is important to understand what these bots can and cannot do, and how to use AI to handle more complex tasks. The USPTO uses automation and AI to improve operational efficiency and empower its highly skilled examining corps, automating a range of processes to reduce the manual burden on its reviewers.
Timothy Goodwin, associate director of the Office of Organizational Policy and Governance at the United States Patent and Trademark Office (USPTO), shares how they use automation and cognitive technology at America’s Innovation Agency. Timothy will be speaking at the upcoming ATARC CPMAI session “Methodologies and Best Practices for Successful RPA Implementation” on July 21, 2021 from 2:00 p.m. to 3:00 p.m., delving deeper into some of the following questions.
How do you use automation at USPTO?
Timothy Goodwin: The depth and breadth of the automation technologies used by the USPTO is enormous, and it’s a critical factor in increasing business value. Recently we have used AI/ML to reduce the manual patent classification work performed by examiners; RPA to reclaim valuable time spent on trademark application checks; and Virtual Data as-a-Service (vDaaS) to improve the quality of applications under development by providing test data on demand. All of this has driven more and more automation capability and enabled our agency to offer higher-quality services to the public.
How do you identify which problem areas to start with for your automation and cognitive technology projects?
Timothy Goodwin: I’ll narrow this question down and focus on RPA. When we first started our RPA program in 2019, we were looking for USPTO processes on which to demonstrate the capability. This began with a “first-in, first-out” intake model, in which submitted requests often helped only an individual user or a small number of users. Since then, we’ve continued to evolve our intake process to take a broader look at automation requests and find critical problem areas that affect USPTO business units. A recent example was developing RPA solutions to reduce the backlog caused by the high volume of trademark applications filed over the past twelve months.
How do you measure the ROI for these types of automation, advanced AI and analytics projects?
Timothy Goodwin: Measurement is always based on the business value produced by the automation. This can take many different forms, depending on the solution being implemented. For cloud infrastructure deployment, it can be as simple as a routine that shuts down unused virtual services when they are idle, avoiding unnecessary costs. For RPA, it can be the number of productivity hours reclaimed by one or more automated process instances. The key metric always comes back to asking ourselves: “How does this help us grant high-quality patents and trademarks in a timely manner?”
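The idle-shutdown routine Goodwin describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not a USPTO implementation: the `VirtualService` type and the two-hour idle threshold are assumptions, and in a real deployment the `running = False` line would instead call the cloud provider’s stop API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class VirtualService:
    """A stand-in record for a cloud virtual service (illustrative only)."""
    name: str
    last_used: datetime
    running: bool = True


def stop_idle_services(services, max_idle=timedelta(hours=2), now=None):
    """Stop every running service idle longer than max_idle; return the names stopped."""
    now = now or datetime.now(timezone.utc)
    stopped = []
    for svc in services:
        if svc.running and now - svc.last_used > max_idle:
            svc.running = False  # in practice: invoke the provider's stop/deallocate API
            stopped.append(svc.name)
    return stopped
```

A scheduler (cron, a cloud function, or an RPA bot) would run this periodically; the returned list doubles as the ROI record of cost-saving actions taken.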
What unique opportunities does the public sector have in terms of data and AI?
Timothy Goodwin: In general, the public sector is a steward of, and has access to, vast amounts of unique data that no other organization in the world can access; this is true in the aggregate, beyond the views already available through open data platforms. Combining these unique data sets with AI offers immense potential to advance research across every discipline known today. Quite simply, it is limitless. The challenges, on the other hand, are ever-present and cut across legal, technical and ethical boundaries. I would, however, reiterate our responsibility as data stewards to maintain public trust. For me, this is the fundamental issue to address when deciding how to use data. Ultimately, which data is used, and for what AI-related purposes, needs to be explicitly defined and reviewed before use, so that we can meet and exceed public expectations.
How do analytics, automation and AI work together at the USPTO?
Timothy Goodwin: USPTO data is unique, and so are our challenges and opportunities. The three areas are naturally interwoven and build on each other to enable advanced capabilities. Automation helps feed our patent and trademark data lakes, where the data is prepared to improve its quality and security. That prepared data in turn feeds our AI/ML models, which will eventually surface data insights and visualizations for broader groups. All of this helps create a sustainable environment for the agency to make data-driven decisions and ensure that the USPTO can continuously provide high-quality services.
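The staged flow Goodwin outlines (automated ingestion, data-lake preparation, model scoring, insights) can be sketched as a chain of small functions. Everything here is an illustrative assumption: the toy quality rule, the keyword “classifier,” and the record fields are placeholders, not USPTO systems; a real pipeline would use a trained ML model at the scoring stage.

```python
def ingest(raw_records):
    """Automation stage: collect raw filing records into the 'lake'."""
    return list(raw_records)


def prepare(lake):
    """Data-quality stage: drop records missing the required fields."""
    return [r for r in lake if r.get("id") and r.get("text")]


def score(prepared):
    """Model stage: a keyword stand-in for a trained classifier."""
    return [
        {**r, "label": "trademark" if "mark" in r["text"] else "patent"}
        for r in prepared
    ]


def summarize(scored):
    """Insight stage: label counts that a dashboard might visualize."""
    counts = {}
    for r in scored:
        counts[r["label"]] = counts.get(r["label"], 0) + 1
    return counts
```

Composing the stages, `summarize(score(prepare(ingest(records))))`, mirrors the interview’s point: each layer feeds the next, and quality problems are filtered out before they reach the model.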
What are you doing to develop an AI-enabled workforce?
Timothy Goodwin: Developing a workforce skilled in advanced technology is a challenge for many federal agencies. At USPTO, we are fortunate to have strong leadership in data science, analytics and AI from Scott Beliveau and our director of emerging technologies, Jerry Ma [For additional insights, Jerry Ma presented at a previous AI In Government event, and Scott Beliveau will be sharing insights at the October 2021 AI In Government event]. With the support of their teams, they are breaking new ground for other USPTO employees by creating opportunities and enabling innovation. Targeted experimentation with AI that offers strong business value is one of the best tools we have for developing our people. In a more practical sense, we have also invested in traditional training, enabling many employees to take various AI/ML and advanced analytics courses. [The best practices approach to doing AI and big data analytics is the CPMAI methodology, which large organizations are increasingly adopting.]
Which AI technologies are you most looking forward to in the coming years?
Timothy Goodwin: I’m really trying to keep an eye on how AI is performing in the cybersecurity research and development space. Much work and success has already been achieved in this area; most modern antivirus products, for instance, now make heavy use of AI for static and dynamic analysis. What interests me most is seeing how AI can “heal” vulnerable or compromised systems in real time. Knowing how vulnerability research is traditionally done, there are numerous ways AI could be used to make a bug infeasible to exploit. Detecting vulnerabilities and rolling out AI-driven patching actions before a compromise occurs is what I hope matures in the years to come.