Hmmm, bet you can guess the answer to that question. Still, you might be surprised to know that the Dept. doesn't independently survey any borrowers to measure satisfaction levels, which I think is dangerous. The 2008 RFQ for collectors of defaulted federal student loans was to have the allocation of accounts determined by how the collectors stack up against these criteria:
(2) PERFORMANCE INDICATOR #2: ACCOUNT SERVICING PERCENTAGE (ASP) - 20 Points.
ASP is the proportion of the sum of the net number of:
- Accounts approved, and if required returned, for administrative resolution (only one administrative resolution is counted in CPCS per account resolved), and
- Accounts that had payments received during the CPCS surveillance period.
(4) SMALL BUSINESS SUBCONTRACTING – A plus or minus range of points.
(5) SERVICE QUALITY (SQ) – A plus or minus range of points. The Government may measure a variety of mostly objective factors that contribute to the quality of service provided to ED and its borrowers. These factors may include accuracy and completeness, rejections, bounced checks, customer satisfaction or other factors.
---------------------------------
Hmm..."may measure," "may include" followed by a list of factors. Service quality sounds kinda undefined, don't you think?
So, let's go back to the 2004 RFQ where one of the performance criteria listed was:
So, what happened between 2004 and 2008 to eliminate this customer service measurement? For that, we turn to the 2008 Q&A:
A. ED has revised the RFQ to eliminate the Service Quality (SQ) performance indicator at the start of the contract, but retained the right to implement one at a later date. Also, the small business subcontracting measure applicable to PCAs in the unrestricted pool was separated from the SQ performance indicator. Implementation of the SQ measure will most likely await the availability of more objective data from improved ED systems.
The goal of the customer service measurement on the 2004 task orders was to gauge the level of service through a variety of compliance measurements, including error rates in AWG validation, EFTs, P-note requests, borrower complaints, compromises, etc. A number of these areas allow for a level of discretion and judgment that is not purely objective or based on system-generated data. This element of subjectivity was in conflict with the goal of CPCS, which is to measure performance objectively.
----------------------------------
Am I the only one concerned about a system that 1) rewards collectors for their ability to collect above all other factors; 2) relies on the collectors to self-report complaints; and 3) does not independently measure customer satisfaction (through surveys of actual defaulted borrowers)? It might be good for taxpayers, but heaven help the students who have legitimate issues and concerns with their defaulted loans. What's the incentive?
Interesting to contrast that with the new DL servicing contract which surveys three groups (borrowers, schools and Dept. of Education personnel) in determining allocations of accounts. The implicit message to students: If you have the misfortune of being one of the five million accounts in default, your satisfaction means nothing to us, we just want to collect.