
SEPTA Expands AI-Powered Bus Lane Enforcement Amid Budget Challenges

In 2023, SEPTA kicked off a pilot program in Philadelphia featuring artificial intelligence-powered cameras on seven of its buses. The initiative yielded immediate results: in just 70 days, the cameras flagged over 36,000 vehicles obstructing bus lanes. This pilot provided SEPTA with critical data on bus lane obstructions and demonstrated the potential of AI technology to address urban transportation issues.

Fast forward to May 2025, when SEPTA, in collaboration with the Philadelphia Parking Authority, officially rolled out the AI enforcement program citywide. More than 150 buses and 38 trolleys across Philadelphia are now equipped with similar AI systems that monitor bus lanes for violations. The cameras use computer vision to detect vehicles blocking designated lanes and to read the license plates of offenders. When a potential infraction is identified, a human reviewer confirms the violation before a fine is issued. The fine varies by location: $76 in Center City and $51 in other areas.
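
To make that workflow concrete, here is a minimal sketch of a human-in-the-loop ticketing pipeline in Python. It is illustrative only: the event fields, the confidence threshold, and the function names are hypothetical rather than SEPTA's actual system; only the human-review step and the zone-based fine amounts come from the reporting above.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Zone-based fine schedule reported in the article:
# $76 in Center City, $51 in other areas.
FINES = {"center_city": 76, "other": 51}

@dataclass
class DetectionEvent:
    plate: str          # license plate read by the onboard camera
    zone: str           # "center_city" or "other"
    confidence: float   # detector's confidence that a bus lane was blocked

def propose_citation(
    event: DetectionEvent,
    human_confirms: Callable[[DetectionEvent], bool],
) -> Optional[int]:
    """Return a fine amount only if a human reviewer confirms the detection."""
    if event.confidence < 0.9:        # assumed threshold, not from SEPTA
        return None                   # too uncertain to forward for review
    if not human_confirms(event):     # human-in-the-loop step from the article
        return None
    return FINES.get(event.zone, FINES["other"])

# Example: a high-confidence detection in Center City, confirmed on review
event = DetectionEvent(plate="ABC1234", zone="center_city", confidence=0.97)
print(propose_citation(event, human_confirms=lambda e: True))  # -> 76
```

The key design choice in this sketch is that the detector alone never issues a fine; it can only propose one for a human reviewer to confirm or reject.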

The rollout comes as SEPTA faces a $213 million budget shortfall that has raised concerns about imminent service cuts and fare hikes.

As a professor of information systems and the academic director of Drexel University’s Center for Applied AI and Business Analytics, I study the intersection of technology and public trust. The center’s research focuses on how organizations use AI and what that means for trust, fairness, and accountability.

Our recent survey of 454 business leaders across technology, finance, healthcare, manufacturing, and government revealed a concerning trend: AI is often adopted faster than the governance structures needed to ensure it functions fairly and transparently. This gap between swift deployment and adequate oversight is particularly pronounced in public-sector organizations.

Given this backdrop, it is essential for SEPTA to administer its AI enforcement system carefully, aiming to build public trust while also mitigating potential risks.

When vehicles block bus lanes, buses get stuck in traffic, and the resulting delays ripple through commuters’ schedules as missed connections and late arrivals. The frustration of unreliable transit is a serious threat to ridership. So if AI enforcement keeps bus lanes clear and traffic flowing, it is a genuine improvement. But such a system cannot rely on good intentions alone; it must also be perceived as fair and trustworthy.

Our survey found that over 70% of respondents do not fully trust their own data. That is especially alarming in the context of public enforcement. If the data behind AI-powered ticketing is unreliable, the ramifications can be costly: incorrect citations that must be refunded, staff time lost correcting errors, and potential legal disputes. Public confidence is crucial, because people are more inclined to comply with regulations and accept penalties when they believe the enforcement process is accurate and transparent.

Compounding the problem, only 28% of organizations reported having a well-established AI governance model in place. Governance frameworks provide the safeguards that keep AI systems trustworthy and aligned with human values. The stakes are even higher for public agencies like SEPTA that administer penalties based on data-driven enforcement.

One natural comparison is between this AI ticketing system and conventional red-light or speed cameras. Both identify rule-breaking behavior and involve human oversight before citations are issued, yet public perception shifts markedly once the term ‘AI’ is introduced.

The phenomenon known as the framing effect suggests that simply labeling a system as AI can increase skepticism. Research has shown that, regardless of a process’s reliability, people are more likely to question AI-driven decisions than comparable non-AI processes. This widespread apprehension means public agencies must pair AI enforcement with transparent practices, robust safeguards, and accessible ways to contest errors. Such measures build public trust in AI enforcement systems.

Past incidents highlight the potential pitfalls of AI-powered enforcement. In late 2024, AI cameras on Metropolitan Transportation Authority buses in New York City erroneously issued thousands of parking tickets, many to drivers who were following the law. Even if such errors are infrequent, they can significantly undermine public confidence in the system.

To foster trust in AI enforcement tools, the Organization for Economic Cooperation and Development underscores that public acceptance hinges on people understanding how AI decisions are made and having straightforward avenues to contest inaccuracies.

For SEPTA, building trust could involve several key steps:
1. Publishing clear rules regarding bus lane usage and any exceptions to them.
2. Outlining the safeguards in place, such as human review of every camera-identified violation.
3. Establishing a straightforward and transparent appeals process that includes management oversight and the right to contest a citation.
4. Sharing data on the volume of violations, appeals, and decisions made (a minimal sketch of such a tally follows this list).
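
For concreteness, here is a minimal sketch of the kind of tally step 4 describes. The record fields and function name are hypothetical illustrations, not an actual SEPTA reporting format.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CitationRecord:
    # Hypothetical fields a transparency report might track
    cited: bool        # a fine was actually issued
    appealed: bool     # the recipient filed an appeal
    overturned: bool   # the appeal succeeded

def transparency_summary(records: list[CitationRecord]) -> dict[str, int]:
    """Tally the volumes an agency could publish: violations, appeals, outcomes."""
    tally = Counter()
    for r in records:
        tally["violations"] += r.cited     # booleans count as 0 or 1
        tally["appeals"] += r.appealed
        tally["overturned"] += r.overturned
    return dict(tally)

# Example: two citations, one appealed and overturned
records = [
    CitationRecord(cited=True, appealed=False, overturned=False),
    CitationRecord(cited=True, appealed=True, overturned=True),
]
print(transparency_summary(records))  # {'violations': 2, 'appeals': 1, 'overturned': 1}
```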

These measures indicate a commitment to fairness and accountability, transforming the perception of the ticketing process from a mere automated system into a reliable public service that the community can trust.


Abigail Harper