In this article you will learn why and how to extend the EFWD file. The main role of EFWD is to connect the Helical Insight application with different data sources through plugins available for those data sources. It supports various databases such as MySQL, Oracle, Postgres, SQLite and so on. For a business user this matters because reports can be created on top of multiple databases. Since no technology stays current forever, extending EFWD helps your business stay connected to the latest data sources, saving both time and cost.
Why Extend EFWD
The EFWD file enables the dashboard to access the data required for generating reports. By default, the EFWD file is configured to pull data from Mondrian and from RDBMSs that support MySQL-style queries. However, if data is required from other data sources, HDI allows the user to develop a plugin for the chosen data source and use it to fetch the data for reports.
Architecture
How to Extend EFWD
EFWD can be extended by adding a plugin and making changes in some settings files. In this section the contents of the plugin and the required configuration file modifications will be explained.
The example provided here is for creating a plugin for importing the data from a CSV file.
The Concept:
To maintain uniformity, the data from any data source that does not support MySQL-style queries must be imported into a database of your choice. (SQLite is the database used in this explanation.)
Prerequisite:
⦁ The CSV file should be a proper CSV with a well-defined delimiter.
⦁ The first row of the CSV file should contain the column headers.
⦁ The driver should be placed in the “System -> Drivers” folder.
⦁ The package name should NOT be the same as the application’s package (com.helical.xxx.xxx).
Creating a PROPERTIES File :
- The PROPERTIES file contains the database details where the data is to be dumped.
- This PROPERTIES file is to be placed in the solution directory, in “System -> Admin”.
The contents of this file (here, csvDataSource.properties) should look like the following:
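A minimal sketch of such a file, assuming a SQLite database and the SQLite JDBC driver; the key names are the ones read by the sample driver code later in this article, while the values shown are only illustrative:

    # csvDataSource.properties
    csvDriverUrl=jdbc:sqlite:
    Driver=org.sqlite.JDBC
    # SQLite ignores credentials, but the keys must still be present
    username=dummy
    password=dummy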
Note: Although SQLite does not need a “username” and “password”, these keys still need to be present and set to dummy values.
Configuring setting.xml :
The setting.xml needs to be modified in order for the report to be able to use the user-defined data source.
“setting.xml” is located in the file repository in folder “System -> Admin”.
In this file, a new “<DataSource/>” tag needs to be added inside “<DataSources></DataSources>”.
In the new “<DataSource/>” tag, the following fields must be present (a sample entry is sketched after this list):
- class : The class which holds the actual functioning of the plugin.
- classifier : Unique identifier for the datasource.
- name : The name given to identify the datasource.
- type : The value used as the connection type in the report’s EFWD file, so that the data map can be matched to this data source.
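A minimal sketch of such an entry, assuming the four fields are written as attributes of the tag; the type, name, classifier and class values are illustrative only and must match your own plugin (remember the prerequisite that the package must not clash with com.helical.*):

    <DataSources>
        <DataSource type="csv.datasource" name="CsvDataSource" classifier="csvplugin"
                    class="com.example.csvplugin.CsvDataSourceDriver" />
    </DataSources>

The type value used here (csv.datasource) is the one the report’s .efwd connection will refer to.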
Change in the .EFWD File:
The connection specified in the .efwd file should follow the format described below (a sample connection is sketched after this list):
- The Connection “type” should be the same as what is specified in the “<DataSource/>” in “setting.xml”.
- The tags inside the “<Connection></Connection>” tag should be noted carefully (here – “<dir></dir>” and “<file></file>”) since the data in these tags will be used in the data dumping code of the plugin.
- The “<dir></dir>” tag holds the path of the directory where the source file is stored.
- The “<file></file>” tag holds the source file name with its extension.
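A minimal sketch of such a connection, assuming the type value configured in setting.xml above; the id, directory and file name are illustrative:

    <Connection id="1" type="csv.datasource">
        <dir>csvPlugin/data</dir>
        <file>sales.csv</file>
    </Connection>

With this connection, the driver shown below would resolve the file to <solution directory>/csvPlugin/data/sales.csv, dump it into sales.db in the same directory, and run the data map’s query against a table named sales.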
Driver Creation:
To create a driver that handles CSV as a data source, the driver must implement the “IDriver” interface belonging to the application. The classes that need to be imported are mentioned in the sample code.
Sample Code
import java.io.File;
import java.sql.Connection;
import java.util.Map;

import net.sf.json.JSONObject;

import com.helical.efw.drivers.EfwdQueryProcessor;
import com.helical.efw.drivers.IDriver;
import com.helical.efw.drivers.JDBCDriver;
import com.helical.efw.singleton.ApplicationProperties;
import com.helical.efw.utility.PropertiesFileReader;
import com.helical.efwd.jdbc.IJdbcDao;
import com.helical.framework.utils.ApplicationContextAccessor;

public class CsvDataSourceDriver implements IDriver {

    @Override
    public JSONObject getJSONData(JSONObject requestParameterJson, JSONObject connectionDetails,
                                  JSONObject dataMapTagContent, ApplicationProperties applicationProperties) {
        String dir = connectionDetails.getString("dir");
        String file = connectionDetails.getString("file");

        // Create the complete path of the csv file
        String csvFilePath = applicationProperties.getSolutionDirectory()
                + File.separator + dir + File.separator + file;

        // Create table name and database name having the same name as the csv file
        String tableName = file.substring(0, file.length() - 4);
        String dbName = tableName + ".db";
        String dbFilePath = applicationProperties.getSolutionDirectory()
                + File.separator + dir + File.separator + dbName;

        // Read the connection details from the properties file
        PropertiesFileReader map = new PropertiesFileReader();
        Map<String, String> read = map.read("Admin", "csvDataSource.properties");
        String csvDriverUrl = read.get("csvDriverUrl");
        String driver = read.get("Driver");
        String username = read.get("username");
        String password = read.get("password");

        // Dump the csv into the database only if it has not been dumped already
        File fileToCheck = new File(dbFilePath);
        if (!fileToCheck.exists()) {
            CsvToDatabaseDumpHandler.dump(csvFilePath, tableName, dbFilePath,
                    csvDriverUrl, driver, username, password);
        }

        // Run the data map's query against the database and return the result as JSON
        String query = getQuery(dataMapTagContent, requestParameterJson);
        IJdbcDao bean = ApplicationContextAccessor.getBean(IJdbcDao.class);
        Connection connection = JDBCDriver.getConnection(csvDriverUrl + dbFilePath,
                username, password, driver);
        return JSONObject.fromObject(bean.query(connection, query));
    }

    @Override
    public String getQuery(JSONObject dataMapTagContent, JSONObject requestParameterJson) {
        EfwdQueryProcessor queryProcessor = new EfwdQueryProcessor();
        return queryProcessor.getQuery(dataMapTagContent, requestParameterJson);
    }
}
Sample Explanation:
public JSONObject getJSONData(JSONObject requestParameterJson,
JSONObject connectionDetails, JSONObject dataMapTagContent,
ApplicationProperties applicationProperties)
To get any data, the “getJSONData” method must first receive four parameters:
- requestParameterJson – Contains the directory of the .efwd file and the map id to be used from the same file.
- connectionDetails – Contains the details of the connection tag in the .efwd file.
- dataMapTagContent – Contains the SQL query in the map tag of the .efwd file.
- applicationProperties – Contains the repository details.
String dir = connectionDetails.getString("dir");
String file = connectionDetails.getString("file");
From the connectionDetails JSON the source file (here – .csv file) and its containing directory location are taken.
// Create the complete path of the csv file
String csvFilePath = applicationProperties.getSolutionDirectory()
        + File.separator + dir + File.separator + file;

// Create table name and database name having the same name as the csv file
String tableName = file.substring(0, file.length() - 4);
String dbName = tableName + ".db";

// Create the database at the same location as the csv file
String dbFilePath = applicationProperties.getSolutionDirectory()
        + File.separator + dir + File.separator + dbName;
PropertiesFileReader map = new PropertiesFileReader();
Map<String, String> read = map.read("Admin", "csvDataSource.properties");
String csvDriverUrl = read.get("csvDriverUrl");
String driver = read.get("Driver");
String username = read.get("username");
String password = read.get("password");
To be able to connect to the database, we need the details specified in the PROPERTIES file (as mentioned above). Using the read method of the “PropertiesFileReader” class, a map is created from which the values of the driver URL, driver name, username and password are obtained.
File fileToCheck = new File(dbFilePath);
if (!fileToCheck.exists()) {
    CsvToDatabaseDumpHandler.dump(csvFilePath, tableName, dbFilePath,
            csvDriverUrl, driver, username, password);
}
First check whether the database created from the CSV already exists. If it does not, create it; otherwise use the existing database.
“CsvToDatabaseDumpHandler” is the class whose dump method contains the logic for converting the CSV into a database table.
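The handler class itself is not shown in this article. A minimal sketch of what it could look like, assuming plain JDBC, a comma delimiter and TEXT columns for every header, is given below; it is only an illustration, not the actual implementation:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class CsvToDatabaseDumpHandler {

        // Reads the csv file and dumps its rows into a table of the given database file.
        public static void dump(String csvFilePath, String tableName, String dbFilePath,
                                String csvDriverUrl, String driver, String username, String password) {
            try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath))) {
                Class.forName(driver);
                try (Connection connection =
                             DriverManager.getConnection(csvDriverUrl + dbFilePath, username, password)) {

                    // The first row holds the column headers (see prerequisites)
                    String[] headers = reader.readLine().split(",");

                    // Create the table with one TEXT column per header
                    StringBuilder createSql = new StringBuilder("CREATE TABLE " + tableName + " (");
                    StringBuilder placeholders = new StringBuilder();
                    for (int i = 0; i < headers.length; i++) {
                        if (i > 0) {
                            createSql.append(", ");
                            placeholders.append(", ");
                        }
                        createSql.append(headers[i].trim()).append(" TEXT");
                        placeholders.append("?");
                    }
                    createSql.append(")");
                    try (Statement statement = connection.createStatement()) {
                        statement.execute(createSql.toString());
                    }

                    // Insert the remaining rows
                    String insertSql = "INSERT INTO " + tableName + " VALUES (" + placeholders + ")";
                    try (PreparedStatement insert = connection.prepareStatement(insertSql)) {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            String[] values = line.split(",");
                            for (int i = 0; i < headers.length; i++) {
                                insert.setString(i + 1, i < values.length ? values[i].trim() : null);
                            }
                            insert.executeUpdate();
                        }
                    }
                }
            } catch (Exception e) {
                throw new RuntimeException("Failed to dump the csv file into the database", e);
            }
        }
    }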
String query = getQuery(dataMapTagContent, requestParameterJson);
IJdbcDao bean = ApplicationContextAccessor.getBean(IJdbcDao.class);
Connection connection = JDBCDriver.getConnection(csvDriverUrl + dbFilePath,
        username, password, driver);
return JSONObject.fromObject(bean.query(connection, query));
The “query” is obtained from the “getQuery” method as a String. The IJdbcDao bean helps with the common JDBC tasks, such as setting parameters on PreparedStatements and iterating over ResultSets.
This returns a JSONObject containing the result of the Query provided.
public String getQuery(JSONObject dataMapTagContent, JSONObject requestParameterJson) {
    EfwdQueryProcessor queryProcessor = new EfwdQueryProcessor();
    return queryProcessor.getQuery(dataMapTagContent, requestParameterJson);
}
The “getQuery” method extracts the query from the .efwd file, using the directory path and map id provided in the requestParameterJson JSONObject and the map contents provided in the dataMapTagContent JSONObject.
For More Info,
Contact us at demo@helicalinsight.com