I query the column names from a PostGIS table with the following Python code:
colnames = []
sql_getcolumns = "select column_name from information_schema.columns where table_name='shp2py';"
cur_obj.execute(sql_getcolumns)
colnames = [row[0] for row in cur_obj]
print colnames
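Here cur_obj is a psycopg2 cursor on the PostGIS database and shpnew (used further down) is the shapefile that arcpy adds fields to; a minimal sketch of my setup, with placeholder connection details and paths rather than my real ones:

import arcpy
import psycopg2

# Placeholder connection parameters for illustration only
conn = psycopg2.connect(host="localhost", dbname="gisdb", user="postgres", password="secret")
cur_obj = conn.cursor()

# Placeholder path to the shapefile that receives the new fields
shpnew = r"C:\temp\shpnew.shp"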
Now I want to get the data type of every column and perform a different task depending on that type:
sql_gettypes = "select data_type from information_schema.columns where table_name='shp2py' and column_name="
for item in colnames:
    cur_obj.execute(sql_gettypes + "'" + item + "',"[:-1])
    datatype = str([row[0] for row in cur_obj])
    #print datatype
    if datatype == "integer":
        arcpy.AddField_management(shpnew, item, "SHORT")
        print "done"
    elif datatype == "numeric":
        arcpy.AddField_management(shpnew, item, "DOUBLE")
        print "done"
    elif datatype == "character varying":
        arcpy.AddField_management(shpnew, item, "TEXT")
        print "done"
    else:
        print "data type not found"
If I uncomment the 'print datatype' statement, it prints every data type correctly, like ['numeric'], ['integer'] and so on.
But when the IF clauses run, the comparison never matches: execution always falls into the ELSE clause and prints "data type not found", even though types like integer are clearly present.
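Here is a minimal standalone snippet that reproduces what I see (the row tuple is made up for illustration and stands in for what the cursor yields for one column):

# A single fetched row, as the cursor would yield it
rows = [('integer',)]
datatype = str([row[0] for row in rows])
print datatype                  # prints ['integer']
print datatype == "integer"     # prints False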
How can I modify this to get the correct results?