Record Locking In FileMaker 6 and earlier versions, record locking occurred when a guest on the network clicked into a field. That was all that was required to prevent others from changing the record. Unfortunately, this also prevented other guests from copying data from a locked record, along with other basic features. FileMaker 7 changed how record locking works. In order for a guest on the network to lock a record, he must actually modify a field, allowing users to work with a record without locking it. The most important change, however, concerns scripting for record locking. You don't want to test whether a record is locked by setting a field to a value. FileMaker 7 introduces the Open Record/Request script step, which attempts to lock a record without modifying it. If the record is locked, error 301 will be returned. If the record is not locked, this script step will lock it until the guest exits the record manually or the Commit Records/Requests script step is initiated.
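A minimal error-trapping sketch of this technique might look like the following script steps. Only Open Record/Request, Get ( LastError ) and Commit Records/Requests come straight from FileMaker; the field, dialog text and everything else are hypothetical:

```
Set Error Capture [ On ]
Open Record/Request
If [ Get ( LastError ) = 301 ]
    # Record is locked by another guest; bail out without touching data
    Show Custom Dialog [ "Record In Use" ; "Another user is modifying this record. Try again later." ]
    Exit Script [ ]
End If
# Safe to modify the record here; the lock is held until commit
Set Field [ Contacts::Status ; "Reviewed" ]
Commit Records/Requests [ No dialog ]
```

The point is that the lock is acquired and tested before any data changes, rather than discovering the lock by failing to set a field.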
Jaymo David Knight started calling me JoMo years ago and it kinda stuck. It's evolved from JMO to the current Jaymo but you can call me anything as long as you are friendly. Other nicknames include Hoss, n2 and Wiz.
Professional developers pay attention to the number of files in their FileMaker solutions because it’s a crucial factor in good database design. Most developers utilize the standard single file structure because it makes development easier. On larger multifaceted projects, a developer might use several files for separate areas of the company. A smaller set of developers like to use the separation model to avoid issues with updates to their solutions, along with other benefits that will be explored. It’s very important to choose the file structure that best suits your style as well as the project at hand.
To separate or not to separate, that is the big developer debate. Before I debate my point of view, let’s define the separation model. It’s actually quite simple but very different from the standard single file development path. Instead of using a single file, the separation model places tables, fields and data in one file and layouts, scripts and other interface components in another file. The user never sees the data file, always working with the interface file. This is possible because FileMaker files can reference external FileMaker files as if they were local. There is no speed degradation. You can create layouts in the interface file based on data tables and then add records into the data file from the interface file. You can search data records from the interface file the same as if you were actually in the data file. The user will never know the difference.
Proponents of the separation model cite the main advantage as facilitating offsite development without requiring an import. In the traditional single file development process, changes to a solution can be made in two ways. First, you can develop directly on the live file itself. Or, you can develop offline and then import the data from the live version.
Live Development The easiest approach is development on a live system. Of course, you don’t want to start a new project on FileMaker Server. I’m only talking about the inevitable changes that come six months down the line. Clients never know everything they want even with the best planning. Changes to a live system are immediately available to users, which is both good and bad. With changes like adding a report layout or a script, there is a big wow factor when the feature just appears. You can make the change and nobody will know you were developing till you reveal the last step such as adding a button to a layout. You don’t have to amass all the changes over months and then release it all at once.
BTW: The ability to make schema changes in Manage Database (Define Fields back then) on a live solution hosted by FileMaker Server was added way back in FileMaker 5.0v3.
The key is not to add something that has the possibility of causing havoc. For example, imagine creating or editing a script that contains the Delete All Records script step. What if there is a bug and the script starts deleting all the records in a table? This is the fear all developers contend with when developing on live systems, and why backups are so important. It is an especially big concern when editing a script that is currently in use. What if someone runs it before you are done? Of course, you can disable the script or work on a duplicate, but these scenarios need to be worked out ahead of time rather than learned the hard way.
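One way to defang a dangerous step like Delete All Records is to trap errors and confirm the found set before deleting. Remember that Delete All Records only deletes the current found set, not the whole table. The layout name, find request and dialog text below are invented for illustration:

```
Set Error Capture [ On ]
Go to Layout [ "Invoices" ]
Perform Find [ Restore ]
# Abort if the find failed or found nothing
If [ Get ( LastError ) ≠ 0 or Get ( FoundCount ) = 0 ]
    Exit Script [ ]
End If
Show Custom Dialog [ "Confirm" ; "Delete " & Get ( FoundCount ) & " records?" ]
If [ Get ( LastMessageChoice ) = 1 ]
    Delete All Records [ No dialog ]
End If
```

Guard rails like these cost a few extra script steps but make a runaway delete far less likely on a live system.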
Most developers, including myself, work on live systems because it is convenient for the client as well as the developer. It is also very reliable, but there is still a chance that something could go wrong. If FileMaker unexpectedly quits due to a software conflict or the power going out, there is always a chance of corruption. Most developers will revert to a backup if a FileMaker file ever crashes. Better safe than sorry. Therefore, it is a good idea to back up your solution constantly. Back up your solution as often as you are willing to recreate lost work. And, you can’t rely on standard backup systems because backing up an open database can create corrupt backups. Only use FileMaker Server to back up.
The most volatile time for a FileMaker file is during a commit. In the past, FileMaker had more corruption issues because it saved changes as you made them. Since the advent of FileMaker 7, FileMaker only saves structural changes when you exit Manage Database and other modal dialogs, leave Layout Mode, or save a script. A crash during this process of committing changes is the most common cause of corruption, so avoiding crashes is the best way to avoid corruption. Developing on a live system over a network or the internet prolongs the commit, increasing the chance for a crash to interrupt it. In addition, a remote connection can always be dropped, a risk that can be avoided by developing in single-user mode from your local hard drive. I say local hard drive because I have seen many developers work from a shared server thinking they are safe. FileMaker is a hard disk based system, so working from a shared volume in single-user mode causes the same latency problem as working on a multi-user live system. The moral of the story: if you do develop on a live system remotely, make sure you have a fast and reliable internet connection.
Live development also needs to contend with record locking. If someone is modifying a record and a developer needs to edit or add a field, the user will need to commit the record before the developer can exit Manage Database. While this won’t necessarily cause corruption, it can slow down the development process. In addition, there are enough moving pieces in a live environment that it is difficult to predict the impact live development could have on data.
While FileMaker, Inc. recommends developing on a local copy of a file when asked directly, they do not publish any white papers regarding best practices. It is quite clear that live development is a valid approach simply by the fact that it is not blocked by the program. I have been working on live systems for over a decade and have never had any problems with corruption. It is a convenient development style worth considering, weighing the upsides and downsides against the specific project and the needs of the client.
Importing I personally know several developers who will not develop on a live system. These developers have two choices left. Either they can choose the separation model or they can import data when a solution is updated. With the import approach, the developer works offline on a copy of the solution and then imports the current data into the new version of the solution. This allows for the convenience of a single file solution, which is the easiest and most natural development method for FileMaker. Combining interface and data into a single file is what makes FileMaker the program that it is. The cost of giving up its greatest strength should not be overlooked.
The downside of working offline with a single file solution is importing the current data. While the script to transfer data is not that complex, it often takes a long time to import the data. Imagine tables with millions of records. Importing could take all night. Yes, I said night because you will need to do the import while all users are offline. In addition, if you have one mismatched or forgotten field, you will have to start over again, if you even notice the mistake.
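A transfer script is typically one Import Records step per table, plus a step to reset the auto-enter serial numbers that imports do not advance. This is only a sketch; the file, table and field names are hypothetical:

```
# Repeat these steps for every table in the solution
Go to Layout [ "Invoices" ]
Import Records [ No dialog ; "LiveData.fmp12" ; Add ; Matching names ]
# Imports do not advance auto-enter serials, so reset them afterwards
Go to Record/Request/Page [ Last ]   # assumes records imported in serial order
Set Next Serial Value [ Invoices::InvoiceID ; Invoices::InvoiceID + 1 ]
```

Forgetting the serial reset is exactly the kind of subtle mistake that forces a second all-night import, so it belongs in the script, not in your memory.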
Updating by import also works best by collecting feature additions and enhancements over time. While this requires users to wait for features, it also gives the developer time to test a feature more thoroughly. While programming can be tested in a live system, any mistakes will be felt on live data, possibly requiring a restore from backup. With careful planning, thoughtful programming and good testing, live systems can be safe, but never as safe as programming offline.
Separation Model You are beginning to see why developing on a live system is so convenient when compared to working on that same solution offline. The separation model fits nicely between the first two approaches. It attempts to make updating a solution easier when working offline. For example, if you want to add a new report, that is done in the interface file. When you are done making the changes offline including layouts and scripts, simply replace the old interface file with the new one and you are done. It’s that easy! No importing required.
But, the development process is never that straightforward. I almost always add a new field when creating a report. It’s impossible to plan ahead for every possible change you might need to make in the future. If any change includes a field or a table, the data file will need to be modified, putting you right back where you started with the importing of data into the new version. Separation model advocates say that in a properly planned solution, tables and fields never need to be added. When pressed, proponents often reveal that extra generic fields of every type are added to every table to accommodate possible changes. Feature creep is a common enough occurrence that it has been given a name. I have been servicing clients for decades and one thing remains consistent: they always add more to a project. Not only that, these generic fields need to be modified to the right type and functionality, bringing separation model advocates right back to modifying a live system, or at least bringing down the FileMaker system to work on it locally. While the separation model approach can absorb some changes to the data file, it doesn’t allow complete flexibility.
IMHO You are probably reading this blog because you want my opinion, so I’m not going to beat around the bush. I heartily dislike the separation model. I really don’t get why developers trade the simplicity and efficiency of a single file solution for a two file solution. A single file solution, combined with live development for unexpected additions, is in my opinion the ideal development approach.
I think the best way to explain my issues with the separation model is with examples. The first example is doubling of effort. Many things need to be done twice, including security, scripting and some relationship graph elements, just to name a few. For example, every time an account is added to the interface file, it also needs to be added to the data file. The best solution is to write a script, but that takes time and takes away from the convenience of simply adding credentials to a single file. Active Directory or Open Directory can be employed, but that really is only an option at larger companies with an IT staff. FileMaker is mainly a small to medium database solution. In addition, different privilege sets still need to be defined in the two files. The interface file security protects layouts and scripts while the data file security protects tables and fields. This splits security administration into two places, making for a more complex interface.
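As a rough sketch, that account-sync script is really a pair of scripts built around the Add Account script step, one per file. The script, file and privilege set names here are invented, and the credentials are assumed to be passed as a script parameter:

```
# Interface file: "Create User" script
Add Account [ Account Name: $account ; Password: $password ; Privilege Set: "Interface Access" ]
# Pass the credentials along so the data file can create the matching account
Perform Script [ "Create User" from file: "Data.fmp12" ; Parameter: $account & ¶ & $password ]

# Data file: "Create User" script
# Parses Get ( ScriptParameter ) into $account and $password, then:
Add Account [ Account Name: $account ; Password: $password ; Privilege Set: "Data Access" ]
```

It works, but compare it to a single file solution, where the same result is one trip to Manage Security.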
One of the ways developers make money is by developing efficiently. The separation model requires that tables and fields be added to one file and layouts and scripts be added to another file. This process takes away from the natural development method in FileMaker. How easy is it to add a new field to a table in a single file scenario? It is best understood by seeing how difficult it is in the separation model. Imagine you are working on a report layout and need a new calculation field to attach to a sub-summary part. You can’t just type Command-Shift-D or Ctrl-Shift-D. You have to first select the data file and then enter Manage Database. This process only takes a few seconds, but multiply it by the thousands of times you enter and exit Manage Database and all of a sudden you have added hours onto your project, not to mention interrupting your train of thought.
I make money by programming efficiently, and the separation model slows down that process. Separation model proponents say proper planning avoids this issue by adding all your tables and fields at one time after a lengthy planning process. This takes away from the natural strength of FileMaker, which is forgiveness. You can add, delete or edit a field or table at any time and FileMaker will update references throughout the entire solution. That capacity for organic development is part of what makes FileMaker unique in the marketplace. Don’t take that away. Plan your solution, especially in areas like relational structure, but also plan on your solution growing organically in other areas. Too much planning ends up costing time and money.
Other benefits of the separation model include the ability to have different interface files for different users or devices. Again, this complicates the entire process. I prefer to have separate layouts in a single file to accommodate different interfaces. This allows you to work on multiple interfaces without switching to different interface files. Not to mention, some interface elements like scripts often transcend device types and would need to be programmed multiple times across different interface files.