Clients are always asking me why the contents of portals don't duplicate along with a record. It makes sense. Portal data looks like it's part of the parent record, so it should duplicate just like the local fields, right? After explaining how relationships work, the client still wants the related data to duplicate. The great thing is, the script isn't that difficult to write. There are just a few gotchas, opportunities for efficiency and, of course, the patented Philosophy of FileMaker considerations.
The Example

Just about every FileMaker solution contains multiple tables. In this article, let's use a simple example of companies and contacts at that company. A standard primary to foreign key connection is made between the tables to form a one-to-many relationship. Nothing crazy here, just the basics.
I guess I could have chosen a more realistic example, but the candidates all seemed to complicate the process. For example, let's say I used an invoicing solution. I would have needed to explain a many-to-many relationship before showing how to duplicate related data. My purpose in choosing this example is to keep things simple so we can focus on the goal of the article. Substitute your own real-life example and don't pay any attention to whether you would actually do this in your own CRM.
Philosophizing

So... how to approach this solution? I often sit for a few minutes juggling the different methods in my head when presented with a complicated hurdle to leap. You have to consider the advantages and disadvantages of each technique before settling on a particular procedure. Otherwise, you'll likely program yourself into a hole and require extra code to dig yourself out. Planning makes perfect.
Obviously, we need to use the Duplicate Record script step to easily duplicate the parent record data. Unfortunately, once the parent record is duplicated, the previous record is lost along with its related data that we want to duplicate. This tells me the related records need to be preserved in some manner before duplicating the parent record.
The first idea that comes to mind is storing the primary key in a variable. With the primary key in memory, a find can be performed to locate the original record at any time. The downside of this approach is that it destroys the user's found set, which is something you should never do. Okay, let's fix the issue by adding a New Window step to the process so the found set in the current window is left untouched. Alas, this just complicates the process and requires the newly duplicated record to somehow be added to the found set in the current window.
Let's try a different approach and think it through before writing any code. If all the operations occur in a single window, found sets can be preserved on layouts. It's basic FileMaker functionality that all developers should understand. That means a simple Go to Related Record script step can create a found set of the related records, and a Go to Layout step with the original layout option can return the focus of the script to the parent record. Now the related records we want to duplicate are preserved on a layout. As long as the window is not closed and the found set is not manipulated from any layout based on the same table occurrence, the found set will be there when we actually want to duplicate the related records.
1) Go to Related Record [Show only related records; From table: "CONTACTS"; Using layout: "List" (CONTACTS)]
2) Go to Layout [original layout]
3) Duplicate Record/Request
TIP: Found sets are stored by table occurrence. In other words, if you have two layouts based on the same table occurrence, like a form and a list view, they will share the same found set. However, a layout based on the same source table but a different table occurrence will store its found set independently.
A Little Knowledge Goes a Long Way

This approach of preserving the found set of related records on a layout before duplicating the parent record is a lot more straightforward than the other considerations! Good thing we thought through the process before programming. Now all we need to do is write the portion of the script that comes after the record duplication. This is not as easy as it might seem. Somehow you have to duplicate each record in the found set but change only the foreign key on the duplicated records. That actually turns out to be the easy part. The hard part is knowing which record to duplicate next, since duplicated records go to two different locations in the current found set. If the found set is unsorted, the duplicate goes to the end of the list. This makes sense since unsorted is the same as creation order and the duplicate is the newest record. If the found set is sorted, the duplicate goes to its place in the current sort order.
Import?

Maybe we can import? I'm thinking import from the related table into the related table. In other words, the source and target are the same table. Brilliant! Not only would the import duplicate the current found set of records, but it would also make a found set containing just the newly imported records. But woe is me, FileMaker doesn't support imports using the same table as both the source and the target. Wow, this would be a great feature! Hint, hint, FMI. I guess I could import the data into a temporary table and then back into the child table. I don't have a problem with the two imports, but I definitely don't like creating schema unnecessarily. I would prefer a self-contained script. The import approach seems to be a dead end, but as you program more, you'll code yourself into a lot of dead ends. This is why I prefer to think through the issue in my head first.
Creation Order

Let's use our knowledge of where duplicated records land in the found set to control the duplication. Since unsorted duplicates are added at the end of the found set, we could use a loop to go through all the records and duplicate them. The problem is how to get back to the next record to duplicate, since the script takes you to the last record after duplication. I guess you could increment a counter and go back to the next record to be duplicated. However, this approach creates an exception that prevents straightforward execution: how does the script exit after the last record is duplicated? You can't use the "Exit after last" option on the Go to Record/Request/Page script step since the loop will never arrive at the last record. What you end up with is an endless loop. To solve this problem, another variable needs to be declared to capture the total records in the found set, along with an Exit Loop If step to exit the loop when the counter reaches that total.
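Sketched as script steps (variable names are illustrative, and I'm assuming the new parent's primary key was captured into $NewParentID right after the parent record was duplicated), the counter approach might look something like this:

1) Set Variable [$Count; Value: Get ( FoundCount )]
2) Go to Record/Request/Page [First]
3) Loop
4) Duplicate Record/Request
5) Set Field [CONTACTS::_kf_companies_id; $NewParentID]
6) Set Variable [$i; Value: $i + 1]
7) Exit Loop If [$i ≥ $Count]
8) Go to Record/Request/Page [With dialog: Off; $i + 1]
9) End Loop

Step 8 jumps back over the freshly created duplicate at the end of the unsorted found set to the next original record, and the Exit Loop If step bails out once the counter reaches the original record count.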
FYI: It is very important the records be unsorted or the script may fail. For example, if a user happens to be looking at the same found set of related records in sorted order, the Go to Related Record script step won't bother finding the records again and will just keep the same sorted set. Seems unlikely, but it's a possibility.
While this script works, I'm not satisfied with the readability. Having to track multiple variables in your head to make sense of a script seems cumbersome. I like my scripts to read naturally whenever possible. It requires less documentation and makes going back six months later to make changes much easier.
Sorted Order

If we simply sort the records, then each duplicated record will be placed in sorted order, making the script far easier to read. For example, let's say you have a list of contacts to duplicate:
Steven Blackwell
Marc Larochelle
John Mark Osborne
The first record to be duplicated will be "Steven Blackwell" and since the found set is sorted, the duplicated "Steven Blackwell" will be placed right after the original:
Steven Blackwell
Steven Blackwell
Marc Larochelle
John Mark Osborne
The script then goes to the next record and duplicates again until the last record is reached. Now this is a very clean script!
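That loop can be sketched as script steps like this (again assuming the new parent's primary key is already in $NewParentID and the sort order is by last name, then first name):

1) Sort Records [Restore; With dialog: Off]
2) Go to Record/Request/Page [First]
3) Loop
4) Duplicate Record/Request
5) Set Field [CONTACTS::_kf_companies_id; $NewParentID]
6) Go to Record/Request/Page [Next; Exit after last: On]
7) End Loop

Because each duplicate lands immediately after its original, Go to Record/Request/Page [Next] moves from the duplicate straight to the next original, and the Exit after last option fires once the final duplicate is the last record in the found set.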
It's important that the records be sorted in an order that places the newly created record right after the record it was duplicated from. For example, if you sort the contacts by last name only, a duplicated record with the same last name as others in the found set will go to the end of that matching group:
John Mark Osborne
Kaitlin Osborne
Matthew Osborne
John Mark Osborne
Matt Petrowsky
It's better to sort by last and first name to prevent this issue. But even this multi-level sort doesn't prevent a situation where two people have the same first and last name. Nevertheless, this is an uncommon scenario and, as I mentioned before, sorting produces a cleaner script.
Ahhh, Omit Record

Still, I'm not quite satisfied. I've been programming long enough to know the "unlikely" will happen at some point. I'd prefer a bulletproof script, so let's try another approach. Again, using our knowledge of how records are duplicated in an unsorted set, this time I want to go with the flow. Instead of trying to relocate focus to the next record to duplicate, I'm going to simply get rid of the original record and the duplicated record using the Omit Record script step. That way, there's no question about what still needs to be duplicated.
Using the Omit Record script step along with the Go to Record/Request/Page [First] script step allows me to locate the next record to duplicate since it will always be the first record. It's so easy once you think about it. If you still don't understand, turn on the script debugger and watch the script run through the records in slow motion and the light bulb will turn on!
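Here's a sketch of the Omit Record version (field and variable names are illustrative, with $NewParentID assumed to hold the duplicated parent's primary key):

1) Go to Record/Request/Page [First]
2) Loop
3) Duplicate Record/Request
4) Set Field [CONTACTS::_kf_companies_id; $NewParentID]
5) Omit Record
6) Go to Record/Request/Page [First]
7) Omit Record
8) Exit Loop If [Get ( FoundCount ) = 0]
9) End Loop

The first Omit Record removes the freshly created duplicate from the found set, and the second removes the original that was just processed, so the remaining first record is always the next one to duplicate.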
FYI: I originally wrote this portal duplication script using the Omit Record script step back in November of 2003 when I was an editor for the now defunct FileMaker Advisor Magazine. It appeared in the popular Tips & Tricks area which I oversaw for many years.
Also, notice that the Exit Loop If script step is employed to exit the loop. This is required since the Go to Record/Request/Page step with the Next option cannot be used. I prefer to avoid performing a calculation every time the script loops, but it's a simple formula that shouldn't slow down the loop in any noticeable fashion.
Empty Portal

Try running this script on a record with an empty portal. Since the Go to Related Record script step essentially does nothing, the script ends up duplicating all the records in whatever found set was left in the child table. That could be all the records in the child table! Imagine all the child records being duplicated and then connected to the copy of the parent record. What a mess! Here's a simple formula to avoid the issue.
I've seen folks use the Count function, but it's slower than IsEmpty since it needs to look at all the records in the relationship. IsEmpty only needs to look at the first related record to determine whether it exists. I always check the primary key field since I know it's never going to be deleted and will always have a value. If you check a data entry field, there's a possibility it was never filled in, causing IsEmpty to report an empty portal even though related records exist. What we want to check is whether the record exists, not whether a particular field is empty or filled.
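With the example relationship, the test might look like this (the key field name follows the naming conventions used in this article but is otherwise just an illustration):

1) If [IsEmpty ( CONTACTS::_kp_contacts_id )]
2) Exit Script []
3) End If

Placed at the top of the script, this exits before the Go to Related Record step can run against an empty portal and leave a stale child found set ripe for mass duplication.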
Let's Get Crazy

If you really want to pull out all the stops and make this script adaptive to the current table and fields, you'll need to observe strict layout and table naming conventions. I always name my tables one word like "COMPANIES" or "CONTACTS", my layouts with the table name at the beginning like "COMPANIES_Form" or "CONTACTS_List" and my primary and foreign keys with the table name like "_kp_companies_id" or "_kf_companies_id". This allows me to abstract my scripts more easily.
Since my layout names contain a one word table name, it's easy to extract the table name to specify for the field reference. I also use the anchor-buoy relational design system so I always know my layouts will be based on a table occurrence using the basic table name. It all flows together if you follow strict naming conventions. As you look through the adaptive version of the script, you'll see I declare various variables to hold information for the script to build field adaptive references and grab field data dynamically.
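As a rough sketch of that idea (assuming the naming conventions above; the exact variables you declare will vary), the table name can be pulled from the layout name and used to build field references dynamically:

1) Set Variable [$Parent; Value: GetValue ( Substitute ( Get ( LayoutName ) ; "_" ; ¶ ) ; 1 )]
2) Set Variable [$Child; Value: Get ( ScriptParameter )]
3) Set Variable [$ParentID; Value: GetField ( $Parent & "::_kp_" & Lower ( $Parent ) & "_id" )]
4) Set Field By Name [$Child & "::_kf_" & Lower ( $Parent ) & "_id"; $ParentID]

With "COMPANIES_Form" as the layout name, the Substitute/GetValue combination returns "COMPANIES", and Set Field By Name assembles the foreign key reference from the child table name passed in as a script parameter.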
It was also necessary to switch to a find to locate the related data. Only the layout name can be specified in the Go to Related Record script step... something I'd like to see changed. Using a find makes the script more verbose and also requires the test for an empty portal to change from the IsEmpty function to Get(FoundCount). The flip side is, all you have to do is modify the script parameter when moving the script to a new table. Pretty slick!
Do I make my scripts adaptive? That's a great question. Most of the time my clients don't need more than one portal duplicated and the script is pretty easy to update without all the complicated adaptive code. It literally takes less than a minute to modify the non-adaptive version of the script to a new table. As an added bonus, I find the non-adaptive version of the script easier to decipher when needed. Yes, sometimes dynamic code creates a more efficient programming environment but most of the time I find it's just a waste of time. KISS.
What'd I Learn

Yes, we learned how to duplicate a portal along with the parent record. Without discounting this very useful technique, what I really wanted to teach you was the process of thinking through a solution. Don't just settle on the first idea that crosses your mind. Consider multiple approaches and pick the best one. If you don't have a second idea on how to solve your problem then Google it or get on the forums and ask. You'll be much happier you did! And, above all else, don't make your scripts complicated just because it seems cool.