
Energy Meter Testing
KPM Three-Phase Energy Meter Reference (KPM MT 3000D)

The MT 3000D is a state-of-the-art portable three-phase energy meter reference for testing and calibrating energy meters. It is an easy-to-use, lightweight device and comes in two accuracy options (0.05% and 0.02%).
The MT 3000D is an all-in-one unit with the following functions:
- Automatic wrong-connection test
- Ratio test
- Harmonic test
- Meter accuracy test (Class 0.05 / 0.02)
- Report printing
KPM Single-Phase Energy Meter Reference (KPM EMR 1)

The EMR 1 is a state-of-the-art portable single-phase energy meter reference for testing and calibrating energy meters. It is an easy-to-use, lightweight device with an accuracy of 0.3%.
Specifications:
1. Test meter errors without disconnecting mains supply.
2. Optional dummy load box
3. Certificate: ISO 9001
Frequently Asked Questions (FAQ)
- 01
What is the purpose of reference testing in energy meter calibration?
The purpose of reference testing in energy meter calibration is to verify and ensure the accuracy of the energy meter by comparing its measurements against a highly precise and traceable standard—called the reference meter or standard. This process identifies any measurement errors or deviations in the energy meter under test, allowing for correction or adjustment to meet specified accuracy classes. Reference testing helps maintain measurement reliability, billing fairness, and compliance with industry standards.
- 02
How is the accuracy class of an energy meter determined?
The accuracy class of an energy meter is determined by evaluating its measurement error under a range of standardized test conditions during calibration. This involves comparing the meter’s energy readings against those of a highly accurate reference standard meter over a set period and at various load levels and power factors.
The calibration process typically tests the meter at multiple points such as:
Light load (e.g., 10% of rated current)
Medium load (e.g., 50% of rated current)
Full load (100% of rated current)
Additionally, measurements are taken at different power factors, including unity (1.0), lagging (inductive), and leading (capacitive) conditions, to simulate real operating scenarios.
At each test point, the percentage error is calculated by comparing the meter’s recorded energy to that of the reference meter. The accuracy class is assigned based on whether these errors stay within the maximum allowable limits defined by standards such as IEC 62053 or ANSI C12.20. For example, a Class 1.0 meter must not exceed ±1% error under these conditions.
Consistent performance across all test points confirms the meter’s accuracy class, ensuring it meets the required precision for billing or monitoring applications.
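The error calculation described above can be sketched in a few lines of code. This is an illustrative example, not any vendor's firmware or test software; the test-point readings and the ±1% limit for a Class 1.0 meter are assumed values for the sketch.

```python
# Sketch: percentage error at each calibration test point, checked against an
# accuracy-class limit. Readings (kWh) are hypothetical example values.

def percent_error(meter_kwh: float, reference_kwh: float) -> float:
    """Percentage error of the meter under test relative to the reference meter."""
    return (meter_kwh - reference_kwh) / reference_kwh * 100.0

def within_class(errors: list[float], class_limit: float) -> bool:
    """A meter meets its accuracy class only if every test point is within the limit."""
    return all(abs(e) <= class_limit for e in errors)

# Hypothetical (meter, reference) readings at light, medium and full load:
test_points = [
    (0.9981, 1.0000),    # 10 % load, unity power factor
    (5.0035, 5.0000),    # 50 % load, unity power factor
    (10.0060, 10.0000),  # 100 % load, 0.5 lagging power factor
]

errors = [percent_error(m, r) for m, r in test_points]
print(within_class(errors, class_limit=1.0))  # Class 1.0 allows ±1 % error
```

Real test programs take many more points (and both lagging and leading power factors), but the pass criterion is the same: every point must fall within the class limit.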
- 03
What are the common standards and regulations followed for energy meter calibration?
Energy meter calibration is governed by international and regional standards to ensure accuracy, reliability, and uniformity. The most widely followed standards include:
IEC 62053 series: International standards specifying performance and accuracy requirements for different classes of electricity meters (e.g., IEC 62053-21 for Class 1 and 2 static active energy meters, IEC 62053-22 for Class 0.2 S and 0.5 S static active energy meters, and IEC 62053-23 for static reactive energy meters).
IEC 60521: An earlier international standard covering Class 0.5, 1, and 2 alternating-current watt-hour meters, now largely superseded by the IEC 62053 series.
ANSI C12 series: North American standards, such as ANSI C12.20, that define accuracy classes and testing protocols for electric meters.
OIML R46: International recommendation by the International Organization of Legal Metrology outlining accuracy requirements and test procedures for electricity meters used in billing.
National regulations: Many countries have their own legal metrology regulations and certification requirements to ensure meters used for billing comply with local laws.
- 04
What reference standards and equipment are used in energy meter calibration?
The reference standard in energy meter calibration is a highly accurate and traceable device known as a calibration standard meter or reference meter. This equipment has a much higher precision than the meter under test, typically with an accuracy class of 0.02% or better. Common types include:
Standard reference meters: Precision static energy meters designed specifically for calibration, with traceability to national or international measurement standards.
Calibrated instrument transformers: High-accuracy current and voltage transformers to supply accurate test signals.
Precision power sources: Devices that can generate stable and controllable voltage and current at various loads and power factors to simulate real operating conditions.
Calibration benches or test rigs: Integrated setups that combine the above equipment to perform automated, controlled calibration tests.
Using these reference standards ensures that the energy meter calibration is accurate, repeatable, and compliant with metrology requirements.
- 05
How do environmental factors affect energy meter calibration?
Environmental factors like temperature, humidity, and atmospheric pressure can influence the accuracy of energy meter calibration.
Temperature: Changes in temperature can affect the electrical characteristics of meter components, causing measurement drift or errors. Most calibration standards specify temperature ranges within which tests should be performed to ensure consistency.
Humidity: High humidity can cause condensation or moisture ingress, impacting insulation resistance and electronic circuits, leading to inaccurate readings during calibration.
Atmospheric pressure: Variations in pressure can subtly affect electrical properties, especially in sensitive equipment, although its impact is generally less significant than temperature or humidity.
To minimize these effects, calibrations are ideally conducted in controlled laboratory environments with stable temperature and humidity, or environmental conditions are recorded and accounted for in the calibration report.
- 06
What is the difference between static and dynamic calibration of energy meters?
Static calibration tests the energy meter under steady-state conditions by applying fixed voltage and current values at set loads and power factors. It measures the meter’s accuracy when the electrical parameters remain constant, helping to verify basic performance and error under controlled, stable conditions.
Dynamic calibration, on the other hand, evaluates the meter’s performance under varying, real-world operating conditions where voltage, current, and load fluctuate over time. It simulates actual usage patterns to assess how accurately the meter records energy during transient events, load changes, and power quality variations.
While static calibration is simpler and faster, dynamic calibration provides a more comprehensive assessment of meter accuracy in practical scenarios, especially important for modern smart meters and complex electrical systems.
- 07
How often should energy meters be recalibrated?
Energy meters should typically be recalibrated every 3 to 5 years, depending on regulatory requirements, manufacturer recommendations, and the operating environment. Frequent recalibration helps detect any drift in accuracy caused by aging, environmental factors, or mechanical wear.
In critical applications or harsh conditions, more frequent recalibration may be necessary. Utilities often follow national standards or legal metrology guidelines that specify maximum intervals to maintain measurement reliability and billing fairness.
- 08
How is the linearity of an energy meter checked?
Checking the linearity of an energy meter involves verifying that the meter’s measurement error remains consistent across a wide range of loads. The process includes:
Apply multiple test currents: The meter is tested at different load levels, typically ranging from low (e.g., 10% of rated current) to full load (100%) and sometimes even above.
Maintain constant voltage and power factor: During each test point, voltage and power factor are kept steady to isolate the current’s effect.
Record meter readings: The energy measured by the meter under test is compared against a reference standard meter at each load level.
Calculate percentage error: For each load, the error percentage is calculated based on the difference between the test meter and reference meter readings.
Analyze results: A linear meter will show minimal variation in error across all loads. Significant deviations indicate non-linearity, which can affect billing accuracy.
This test ensures the meter accurately measures energy consumption regardless of load size, essential for fair billing and reliable performance.
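The linearity analysis in the steps above can be sketched as follows. This is a hedged illustration: the load points, error values, and the 0.5% spread threshold are assumptions for the example, not figures from any standard.

```python
# Sketch: linearity check — how much the percentage error varies across loads.
# A linear meter shows a small spread; a large spread indicates non-linearity.

def linearity_spread(errors_by_load: dict[float, float]) -> float:
    """Spread between the largest and smallest percentage error across load levels."""
    errs = list(errors_by_load.values())
    return max(errs) - min(errs)

# Hypothetical percentage errors, keyed by load as a fraction of rated current:
errors_by_load = {0.10: -0.12, 0.50: 0.05, 1.00: 0.08}

spread = linearity_spread(errors_by_load)
is_linear = spread <= 0.5  # assumed acceptance threshold for this example
print(f"error spread across loads: {spread:.2f} %, linear: {is_linear}")
```

The spread metric isolates load-dependent behaviour: a meter could pass every individual test point yet still drift systematically from light to full load, which this check would reveal.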
- 09
How are errors identified and corrected during calibration?
During calibration, errors in energy meters are identified by comparing the meter’s recorded energy values to those of a highly accurate reference standard under controlled test conditions. The percentage difference between the meter reading and the reference reading reveals the magnitude and direction of the error.
If errors exceed acceptable limits defined by standards, corrective actions may include:
Adjustment: For mechanical meters, physical adjustments (like repositioning the dial or adjusting the braking magnet) can reduce errors.
Reprogramming or firmware updates: For electronic meters, software recalibration or parameter tuning may correct measurement deviations.
Component replacement: Faulty parts, such as sensors or electronic modules, may be replaced to restore accuracy.
Rejecting the meter: If the errors cannot be corrected within tolerance, the meter may be deemed unfit for use.
After correction, the meter is re-tested to confirm that errors now fall within the required accuracy class, ensuring reliable measurement and billing.
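The pass/adjust/reject decision described above can be summarised in a small sketch. The ±1% limit and the sample errors are illustrative assumptions; real acceptance limits come from the meter's accuracy class under the applicable standard.

```python
# Sketch: classifying a meter after calibration based on its measured error.

def calibration_verdict(error_pct: float, class_limit: float,
                        correctable: bool) -> str:
    """Decide what to do with a meter given its worst-case percentage error."""
    if abs(error_pct) <= class_limit:
        return "pass"                # within class limits, no action needed
    if correctable:
        return "adjust and re-test"  # e.g. mechanical adjustment or firmware recalibration
    return "reject"                  # cannot be brought within tolerance

print(calibration_verdict(0.4, 1.0, correctable=True))   # pass
print(calibration_verdict(1.6, 1.0, correctable=True))   # adjust and re-test
print(calibration_verdict(2.5, 1.0, correctable=False))  # reject
```

Note the re-test branch: a corrected meter goes back through the same test points, and only the post-correction result determines whether it is fit for service.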