
Database

I'm thinking of using a plain text format for a data file format for my app/platform. It seems to me that I can use string programming to read and write data from a plain text file, and since almost all of the data needs to be read and written in series rather than searched through like a dictionary-type database, a plain text file seems like a good solution.


The problem is converting the numeric values to objects, performing a calculation, and placing them into an array.


So my questions include 1) Am I missing something before I go ahead and make plain text files my format for application data?

2) What are some of the efficient ways to take comma separated float values in a string from a location or file (after locating the symbol used by the reader to identify the beginning and end of the data string), convert each into an object, perform a calculation, and add that value to an array? I need several such sub-arrays, which then have to be re-calculated together. Will a new stack be opened for a new array, or should I direct the computer to open a new stack for each new array that is needed simultaneously?



Let's say I have these values in a string read from a text file: "12.3,12.5,12.6,13.2,13.4,13.8"


So let's say I take the string, read to a comma, convert it into an NSNumber object, all assigned to an instance variable. Then I do the same for the second value, etc. Is it better to take a couple of values and perform the calculation, move the result to another array, and then re-use the instance variables, or is it better to store an array of the NSNumber objects before doing any calculations? Should I read each value, assign a variable, and then put all the variables and calculation directives onto a queue?
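A minimal sketch of that parsing step, assuming Foundation (the string literal stands in for text read from a file, e.g. with `-stringWithContentsOfFile:encoding:error:`). There is no need to manage a separate "stack" per array: `NSMutableArray` instances are ordinary heap objects, and it is usually simpler to build the whole array of `NSNumber` objects first and then run the calculation over it.

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // The example string from the question; in a real app this
        // line would be read from the data file.
        NSString *line = @"12.3,12.5,12.6,13.2,13.4,13.8";

        // Split on commas and box each field as an NSNumber.
        NSMutableArray *values = [NSMutableArray array];
        for (NSString *field in [line componentsSeparatedByString:@","]) {
            [values addObject:@([field doubleValue])];
        }

        // Perform a calculation over the whole array (here, the mean)
        // instead of juggling per-value instance variables.
        double sum = 0;
        for (NSNumber *n in values) {
            sum += [n doubleValue];
        }
        NSLog(@"mean = %f", sum / [values count]);
    }
    return 0;
}
```

Creating several such arrays at once is just creating several heap objects; no special queueing or stack management is required unless the work is moved onto background threads.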

iMac

Posted on Oct 20, 2012 5:41 AM

Question marked as Best reply

Posted on Oct 20, 2012 8:01 AM

mark133 wrote:


I'm thinking of using a plain text format for a data file format for my app/platform. It seems to me that I can use string programming to read and write data from a plain text file, and since almost all of the data needs to be read and written in series rather than searched through like a dictionary-type database, a plain text file seems like a good solution.


Don't do that.


The problem is converting the numeric values to objects, performing a calculation, and placing them into an array.


Among other problems.


1) Am I missing something before I go ahead and make plain text files my format for application data?


Yes. That is a royal nightmare.

2) What are some of the efficient ways to take comma separated float values in a string from a location or file (after locating the symbol used by the reader to identify the beginning and end of the data string), convert each into an object, perform a calculation, and add that value to an array? I need several such sub-arrays, which then have to be re-calculated together. Will a new stack be opened for a new array, or should I direct the computer to open a new stack for each new array that is needed simultaneously?


Hmmmm... Perhaps the only thing worse than plain text is comma delimited text. Really, you don't want to go there.


Why not use Core Data? I know it can be difficult to get your head around, but it will be worth it in the end. Start slow. Do some practice apps. Just never roll your own data format.

43 replies

Oct 23, 2012 4:50 AM in response to etresoft

So it looks like this is where I've ended up, too. Looking at XML with its wide use (including support on my current system), but looking to JSON for the simple, serialized data required for the application.



http://digitalbazaar.com/2010/11/22/json-vs-xml/



I'm still not sure what you mean about files, data containers, and managing a service. It looks like the same discussion happening in the link. Is that the cloud-type design I should be trying to see more clearly?

Oct 23, 2012 5:49 AM in response to mark133

That's a very good link. The reply by the author in the comments is also quite insightful. I've experienced this a couple of times, spending years learning C++ and XML only to realize that it is so much easier using other technologies. C++ and XML are fun and challenging but not very productive. For someone who likes fun and challenging riddles, it can be hard to turn your back on that. It is curious that the author may still be suffering from that issue with C++. Hopefully he has dumped that and gone to Perl or PHP by now.


The issue with files is that I am trying to get you to think in terms of a system and not as a bunch of parts. Building a system is hard, and hard to do well. It isn't a question of exchanging files. There are no files. Everything is in a database on the server, and it is only structured data that moves back and forth. If necessary for offline access, you may have client-side caches too - but not files.

Oct 23, 2012 6:12 AM in response to etresoft

Here's an outline of the system I'm trying to build:


If all professions adopt a standard for measurement that includes time and space coordinates and only a single variable measure (no ratios), then a platform, or any number of various platforms, can interpret that data together across many fields. Data set files can be searched as easily as videos on YouTube and the most amazing measurement relationships can be discovered with countless pointers to subsequent discoveries. I've tried this, after designing certain of the tools for financial management, and the results are phenomenal. Really, absolutely phenomenal.


I could streamline the project and just design a system for my own use, but the social value of more measurements, more platforms searching through 'data lists' and more discussion about possible causes of apparent variable change relationships is too great to ignore, and increases geometrically with the quantity of users.


There are new federal standards for geospatial data that look as if there is a federal effort to apply some of the concepts, but the entire design is simply too bulky to be of much use. I'm talking about one-click science on a scale that has never been thought possible. Needless to say, I've had difficulty gaining much interest. Academics don't like the idea of scientific study for the masses, government is not particularly excited about the idea of popular knowledge, and individuals simply don't know what on earth I'm talking about.


So I want to launch this app with a data container or else data file system that is easy to share and easy to adopt by other competing platforms, once the concept is seen and understood from the example platform that I am developing.


Right now, it looks like I'm settling on JSON if I can upgrade my Xcode to have access to the JSON classes without changing my operating system. I also need the files, or data containers, to have a label with attributes that are easily searchable over the web. I'm not sure how to do that part, or which 'meta tags' are best to use to generate web-searchable data container labeling information.
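The JSON classes in question are available through `NSJSONSerialization`, which requires OS X 10.7 / iOS 5 or later. A minimal round-trip sketch, where the `"label"` and `"values"` keys are illustrative choices, not an established schema:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // A labelled data set: some metadata plus the serialized values.
        NSDictionary *dataSet = @{ @"label"  : @"sample-readings",
                                   @"values" : @[ @12.3, @12.5, @12.6 ] };

        // Encode to JSON bytes, suitable for writing to disk or the network.
        NSError *error = nil;
        NSData *json = [NSJSONSerialization dataWithJSONObject:dataSet
                                                       options:NSJSONWritingPrettyPrinted
                                                         error:&error];

        // Decode back into Foundation objects (NSDictionary/NSArray/NSNumber).
        NSDictionary *decoded = [NSJSONSerialization JSONObjectWithData:json
                                                                options:0
                                                                  error:&error];
        NSLog(@"%@ -> %@", decoded[@"label"], decoded[@"values"]);
    }
    return 0;
}
```

Numbers come back as `NSNumber` objects, so the decoded values can feed directly into the kind of array calculations discussed earlier in the thread.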

Oct 23, 2012 10:45 AM in response to etresoft

OK, it looks like there are some limitations to data sharing that I didn't know about. I guess my model has to change a little. The system has to include an analysis platform application as well as a website application along with the iOS client application. An e-mail regarding a data set would reference the URL for the data set. Is that more like what I should be thinking?

Oct 23, 2012 4:30 PM in response to etresoft

OK, I just opened a .xcdatamodeld and have added the entities. It feels like a big step because I can now see how the formulation of the Core Data model can be carried out in any direction as the core of the data model. I know, I should have guessed that from the name, but some of us have to learn the round-about way!


I feel pretty good about my understanding of the persistent store coordinator, managed object context, and the other classes needed to make the Core Data model function in the application. The elimination of difficulties is difficult. So I really feel that my difficulties with this must be approaching an end (and a beginning of time without difficulties).
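For reference, the pieces mentioned above wire together in a fairly fixed pattern in the Core Data stacks of this era. A minimal sketch, assuming a compiled model named "Model.momd" in the main bundle; the store filename "Store.sqlite" is illustrative:

```objc
#import <CoreData/CoreData.h>

// Build a minimal Core Data stack: model -> coordinator -> context.
NSManagedObjectContext *MakeContext(void) {
    // Load the compiled data model from the bundle.
    NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"Model"
                                              withExtension:@"momd"];
    NSManagedObjectModel *model =
        [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];

    // The persistent store coordinator mediates between the model
    // and the on-disk store.
    NSPersistentStoreCoordinator *coordinator =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

    NSURL *docs = [[[NSFileManager defaultManager]
        URLsForDirectory:NSDocumentDirectory
               inDomains:NSUserDomainMask] lastObject];
    NSURL *storeURL = [docs URLByAppendingPathComponent:@"Store.sqlite"];

    NSError *error = nil;
    [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil
                                        URL:storeURL
                                    options:nil
                                      error:&error];

    // The managed object context is what application code talks to.
    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
    [context setPersistentStoreCoordinator:coordinator];
    return context;
}
```

Application code then fetches and saves entities through the returned context; the coordinator and model rarely need to be touched again after setup.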

Oct 23, 2012 4:36 PM in response to mark133

Why ever would most professions, much less all professions, adopt the standard for measurements that you propose? For many fields, temporal and spatial data is useless (or at least pointless). And for far more fields, representing only single variables obscures far more than it reveals - there's a reason that multi-variate statistics have been developed.


I think you're putting too much faith in the capabilities of the Mark-I eyeball.


Regardless, even for those areas where such measurement will work, you are either looking at a classic client-server model for accessing your data from a central location, or some kind of distributed application. Client-server is clunky and probably always will be. Distributed applications are elegant but nearly impossible to build (unless you're a Ph.D. in Computer Science...). Something like CIEL and Skywriting.


Etresoft is probably quite correct, you should start relatively small and then scale up.

Oct 23, 2012 6:20 PM in response to g_wolfman

http://www.youtube.com/playlist?list=PL80289C80EF4B3428&feature=view_all


Thank you for your interest in the Base1 platform...


I developed the tools as a financial management professional, in conjunction with studies of classical philosophy.


The link at top is a partial introduction to compare some of the key similarities and differences between modern scientific analysis and the Base1 Analysis System.


Please keep in mind, this is only one of several projects I have been involved with in recent years, and most of them have been more urgent, necessary and more demanding. Even now, I have only a part of my available time to devote to the development of this platform.

Oct 23, 2012 6:38 PM in response to g_wolfman

http://www.youtube.com/watch?v=9-_OqUCcmDE


The above link is a Base1 map by which I was able to discover a migration pattern of cancer in the USA, from both the northern and southern borders towards the southern central USA. Using Department of Forestry data, I also found that the migration pattern follows along certain hardwood forests and also stops or slows at major roadways. I also discovered a pattern in the Northern area that very closely matches a 1976 survey of a certain edible berry. This was all in about 5 hours, but with the proper platform it could have been 5 minutes, and with more data sets, it could have been many more discoveries and more investigation into causes and relations.


The study sparked several congressional bills for cancer-mapping related funding, but they are simply missing the point. This is only the tip of the iceberg.

