
Python — Convert CSV to DBF

I would like to convert a CSV file to DBF using Python (for use in geocoding, which is why I need the DBF file). I can easily do this in Stat/Transfer or other similar programs, but I would like to do it as part of my script rather than having to go to an outside program. There appear to be a lot of questions/answers about converting DBF to CSV, but I am not having any luck the other way around. An answer using dbfpy is fine; I just haven't had luck figuring out exactly how to do it. As an example of what I am looking for, here is some code I found online for converting DBF to CSV:

import csv, arcgisscripting
from dbfpy import dbf

gp = arcgisscripting.create()
try:
    inFile = gp.GetParameterAsText(0)   # Input
    outFile = gp.GetParameterAsText(1)  # Output
    dbfFile = dbf.Dbf(open(inFile, 'r'))
    csvFile = csv.writer(open(outFile, 'wb'))
    headers = range(len(dbfFile.fieldNames))
    allRows = []
    for row in dbfFile:
        rows = []
        for num in headers:
            rows.append(row[num])
        allRows.append(rows)
    csvFile.writerow(dbfFile.fieldNames)
    for row in allRows:
        print row
        csvFile.writerow(row)
except:
    print gp.getmessage()

2 Answers

A promising answer there (among others): use the csv library to read your data from the CSV file; the third-party dbf library can write a DBF file for you.

For example, you could try:


You could also just open the CSV file in OpenOffice or Excel and save it in dBase format.

I assume you want to create attribute files for the Esri Shapefile format or something like that. Keep in mind that DBF files usually use ancient character encodings like CP 850. This may be a problem if your geo data contains names in foreign languages. However, Esri may have specified a different encoding.
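A quick way to check, before writing the DBF, whether your attribute values survive a legacy-codepage round trip (CP 850 here is just an example; Esri may specify a different encoding):

```python
# returns True if `text` can be written to and read back from CP 850 intact;
# the 'replace' handler silently substitutes '?' for unmappable characters
def survives_cp850(text):
    return text.encode('cp850', 'replace').decode('cp850') == text

print(survives_cp850('Müller'))   # Western European letters are covered
print(survives_cp850('北京'))      # CJK characters are not
```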

EDIT: just noted that you do not want to use external tools.


convert csv file to dbf

I've got a lot of CSV files and would like to convert them to DBF files. I found the code from Ethan Furman (see below). It works really well, thanks a lot, but my CSV files use a semicolon as the delimiter, so Python puts all my data into one column even though I have 5 columns. How can I change the delimiter? Here is the link: Convert .csv file into .dbf using Python? Specifically:

import dbf
some_table = dbf.from_csv(csvfile='/path/to/file.csv', to_disk=True)

This will create a table with the same name, with either Character or Memo fields and field names of f0, f1, f2, etc. For a different filename use the filename parameter, and if you know your field names you can also use the field_names parameter.

some_table = dbf.from_csv(csvfile='data.csv', filename='mytable', field_names='name age birth'.split()) 

I’m not sure what code you’re referring to. It seems you’re missing a few links. I checked the code at code.activestate.com/recipes/362715-dbf-reader-and-writer, and it seems to convert from one dbf (Xbase) format to another (binary), not from CSV to dbf. Is that the right code?

1 Answer

Looking at the dbf code, I don’t see any way to pass a dialect, so you may transform your files as follows:

import csv
reader = csv.reader(open('input.csv'), delimiter=';')
writer = csv.writer(open('output.csv', 'w'))
for row in reader:
    writer.writerow(row)

Note: this will properly quote any values that already contain a comma as part of their contents.
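To see that quoting in action, here is a self-contained sketch of the same semicolon-to-comma rewrite using in-memory streams instead of files:

```python
import csv
import io

# a field containing the output delimiter is quoted automatically,
# so embedded commas survive the ';' -> ',' rewrite intact
src = io.StringIO('name;note\nalice;likes cats, dogs\n')
out = io.StringIO()
writer = csv.writer(out)
for row in csv.reader(src, delimiter=';'):
    writer.writerow(row)
print(out.getvalue())
```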

Edit: If you’re willing to patch dbf.from_csv to accept delimiter as a parameter to avoid transforming all your csv files, this should work:

--- dbf.py.orig 2012-01-23 12:48:32.112101218 +0100
+++ dbf.py      2012-01-23 12:49:59.468534408 +0100
@@ -4502,13 +4502,14 @@
         print str(table[0])
     finally:
         table.close()
-def from_csv(csvfile, to_disk=False, filename=None, field_names=None, extra_fields=None, dbf_type='db3', memo_size=64, min_field_size=1):
+def from_csv(csvfile, to_disk=False, filename=None, field_names=None, extra_fields=None, dbf_type='db3', memo_size=64, min_field_size=1,
+        delimiter=','):
     """creates a Character table from a csv file
     to_disk will create a table with the same name
     filename will be used if provided
     field_names default to f0, f1, f2, etc, unless specified (list)
     extra_fields can be used to add additional fields -- should be normal field specifiers (list)"""
-    reader = csv.reader(open(csvfile))
+    reader = csv.reader(open(csvfile), delimiter=delimiter)
     if field_names:
         field_names = ['%s M' % fn for fn in field_names]
     else:


converting csv to dbf


I have a Python-created CSV that I am trying to turn into a DBF, but it is not being read properly by the Table To Table conversion tool. I select the CSV and set the output to XXX.dbf, but the output has one field: it contains the number of rows I need, but the value of each row is the comma-separated data. I have no spaces or special characters. ArcGIS 10.1. Here is the Python used to create the table. This is my first time using the CSV writer; maybe I am implementing it wrong to feed the DBF.

# This script looks in a defined folder for shapefiles and creates the beginnings of a data dictionary.
# Creator: David Campbell
# Date: March 12, 2015
import arcpy, os, sys, csv, datetime, time
from arcpy import env

print 'Start @ ' + datetime.datetime.fromtimestamp(time.time()).strftime('%H:%M:%S')

############## User Values #############
project = 'P747'
TSA_Table = 'TSA_LB_V2'
CLS_Table = 'CLS_LB_V2'
MDL_Table = 'MDL_LB_V2'
ARIS_Table = 'ARIS_V5'
RSA_Table = 'RSABLKS_V5'
AVI_Table = 'AVI_ATTR_V2'
FolderLocation = r"\\silver\clients\Projects\P747\3_Landbase\LB1\Submission_Data\LB_20150304"
LargestNumberForUniqueValues = 20
SDELocal = "C:\\Users\\david\\AppData\\Roaming\\ESRI\\Desktop10.1\\ArcCatalog\\TFC.sde\\"
Project_DB = "C:\\Users\\david\\AppData\\Roaming\\ESRI\\Desktop10.1\\ArcCatalog\\P747.sde\\"
ORA = 'ORATFC01'
#######################################
open(os.path.join(FolderLocation, "DD.csv"), "w")
FCView = FolderLocation + "\\FCView.lyr"
env.workspace = FolderLocation
SDELOC = SDELocal
if arcpy.Exists(FolderLocation + "\\DD.csv"):
    arcpy.Delete_management(FolderLocation + "\\DD.csv")
# if arcpy.Exists(SDELOC + "DD_Table") == False:
#     print "SDE Table does not exist"
if arcpy.Exists(SDELOC + "DD_Table"):
    arcpy.Delete_management(SDELOC + "DD_Table")
if arcpy.Exists(SDELOC + "DD_Table") == True:
    print "True"
    sys.exit()
a = "ID,PROJECT_NO,DATA_NAME,DATA_TYPE,DATA_LOCATION,SOURCE_CONTACT_NAME,PROJECTION,DATUM,UNITS,DATA_PRECISION,TOLERANCE,SCALE_OF_CAPTURE,GENERATION_ORIGIN,FEATURE_DESC,COLUMN_NAME,FLDTYPE_NAME,COLUMN_WIDTH,COLUMN_DECIMAL,COLUMN_DESC"
csvfile = FolderLocation + "\\DD.csv"
allrows = []
allrows.append(a)
SDETable = [TSA_Table, CLS_Table, MDL_Table, ARIS_Table, AVI_Table, RSA_Table]
#SDETable = ['TSA_LB_V2','CLS_LB_V2', 'MDL_LB_V2','ARIS_V5', 'RSABLKS_V5', 'AVI_ATTR_V2']
x = 1
for r in arcpy.ListFiles("*.shp"):
    if arcpy.Exists(FCView):
        arcpy.Delete_management(FCView)
    arcpy.MakeTableView_management(r, FCView)
    fields = arcpy.ListFields(r)
    desc = arcpy.Describe(r)
    for field in fields:
        FN = field.name
        DataString = str(x) + "," + project + "," + r + "," + desc.shapeType + "," + FolderLocation + "," + "," + desc.spatialReference.name + "," + desc.spatialReference.GCS.datumName + "," + "Meters" + "," + "," + "," + "," + "," + "," + FN + "," + field.type + "," + str(field.length) + "," + str(field.precision)
        if field.type in ('String', 'SmallInteger'):
            UniqueValues = sorted(set([XX[0] for XX in arcpy.da.SearchCursor(FCView, FN)]))
            if int(len(UniqueValues)) >= LargestNumberForUniqueValues:
                b = DataString + ", Variable values"
                allrows.append(b)
                x += 1
            if int(len(UniqueValues)) < LargestNumberForUniqueValues:
                for XX in UniqueValues:
                    if XX == ' ':
                        continue
                    c = DataString + ", " + str(XX)
                    allrows.append(c)
                    x += 1
        else:
            e = DataString + ", 0 - X"
            allrows.append(e)
            x += 1
env.workspace = Project_DB
for SDE in SDETable:
    for SDEField in arcpy.ListFields(SDE):
        SDE_FN = SDEField.name
        ORASDEString = str(x) + "," + project + "," + str(SDE) + ",SDE Table" + "," + ORA + "," + "," + "," + "," + "," + "," + "," + "," + "," + "," + SDE_FN + "," + SDEField.type + "," + str(SDEField.length) + "," + str(SDEField.precision)
        if SDEField.type in ('String', 'SmallInteger'):
            UniqueValues2 = sorted(set([Y[0] for Y in arcpy.da.SearchCursor(SDE, SDE_FN)]))
            if int(len(UniqueValues2)) >= LargestNumberForUniqueValues:
                f = ORASDEString + ", Variable values"
                allrows.append(f)
                x += 1
            if int(len(UniqueValues2)) < LargestNumberForUniqueValues:
                for XXX in UniqueValues2:
                    g = ORASDEString + ", " + str(XXX)
                    allrows.append(g)
                    x += 1
        else:
            h = ORASDEString + ", 0 - X"
            allrows.append(h)
            x += 1
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    for val in allrows:
        writer.writerow([val])
env.workspace = SDELOC
if arcpy.Exists(FCView):
    arcpy.Delete_management(FCView)
arcpy.MakeTableView_management(csvfile, FCView)
arcpy.TableToTable_conversion(FCView, SDELOC, "DD_Table")
print 'Completed @ ' + datetime.datetime.fromtimestamp(time.time()).strftime('%H:%M:%S')
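The one-field symptom usually comes from the final loop: writer.writerow([val]) hands the csv writer a one-element list, so each pre-joined string is emitted as a single quoted field. A hedged sketch of the fix, with stand-in data, is to give writerow one list element per column (building allrows as lists from the start would be safer still, since it also survives commas inside values):

```python
import csv
import io

# stand-in for allrows: pre-joined strings, as the script above builds them
allrows = ['ID,PROJECT_NO,DATA_NAME', '1,P747,ROADS', '2,P747,RIVERS']

out = io.StringIO()
writer = csv.writer(out, lineterminator='\n')
for val in allrows:
    writer.writerow(val.split(','))   # one list element per column
print(out.getvalue())
```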



Creates a legacy dbf file from csv files using python dbf lib

mikebrennan/csv2dbf_python


README.md


This script converts CSV files to DBF

Requirements:

Header.csv - contains the DBF definition (required names ==> FIELDNAME,TYPE,LENGTH):

FIELDNAME,TYPE,LENGTH
STAFF,N,10
STAFF_NAME,C,101
DATEIN,D,

Body.csv - contains your db info based on your header FIELDNAME definitions:

STAFF,STAFF_NAME,DATEIN
1234,joe smith,20130925
1235,jill smith,20130925

dbfpy directory - contains the library for dbf APIs. This directory needs to be in the same directory as this script. Downloaded from: http://dbfpy.sourceforge.net/

Note: I have not tested this with DBF TYPE 'L' TRUE/FALSE.
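As a sketch of how the Header.csv definitions might be parsed before handing them to dbfpy (defaulting the empty LENGTH of date fields to 8 is an assumption based on DBF dates being 8 bytes; dbfpy's own API calls are not shown):

```python
import csv
import io

# inline stand-in for the Header.csv shown above
header_csv = 'FIELDNAME,TYPE,LENGTH\nSTAFF,N,10\nSTAFF_NAME,C,101\nDATEIN,D,\n'

fields = []
for row in csv.DictReader(io.StringIO(header_csv)):
    # date ('D') fields have no LENGTH in the example; DBF dates are 8 bytes
    length = int(row['LENGTH']) if row['LENGTH'] else 8
    fields.append((row['FIELDNAME'], row['TYPE'], length))
print(fields)
```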


